[ceph-users] Massive Mon DB Size with noout on 14.2.11

2020-10-02 Thread Andreas John
Mon DB size increased drastically. We have 14.2.11, 10 OSDs @ 2 TB and CephFS in use. Is this a known issue? Should we avoid noout? TIA, derjohn
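
A quick way to check whether the mon store really is what is growing, and to release the space once the cluster is healthy again (a sketch; the store path assumes a default mon named after the host):

=== 8< ===
# size of the mon's RocksDB store
du -sh /var/lib/ceph/mon/ceph-$(hostname -s)/store.db

# once recovery is done, clear the flag and compact
ceph osd unset noout
ceph tell mon.$(hostname -s) compact
=== 8< ===

The mons keep old OSD maps around while PGs are degraded or OSDs are down, so the store typically only shrinks back after the cluster has returned to HEALTH_OK.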

[ceph-users] Re: Massive Mon DB Size with noout on 14.2.11

2020-10-02 Thread Andreas John

[ceph-users] Re: multiple OSD crash, unfound objects

2020-10-10 Thread Andreas John
>>>> so tried doing a 'ceph pg force-recovery' on the affected PGs, but only one seems to have been tagged accordingly (see ceph -s output below). >>>> The guide also says "Sometimes it simply takes some t
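
For reference, a minimal sketch of forcing recovery on a PG and checking that it was actually tagged (the PG id is a placeholder; force-recovery exists since Luminous):

=== 8< ===
ceph pg force-recovery 7.374     # PG id is an example
ceph -s                          # the PG should show the forced_recovery state
=== 8< ===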

[ceph-users] Re: Ceph test cluster, how to estimate performance.

2020-10-13 Thread Andreas John
...don't know why. The disk itself is capable of delivering well above 50 KIOPS; the difference is an order of magnitude. Any info is most welcome. >> Daniel Mezentsev, founder >> (+1) 604 313 8592. >> Soleks Data Group. >> Shaping the clouds.
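
To separate raw disk speed from what the cluster delivers, a short rados bench run against a test pool gives a quick baseline (a sketch; the pool name, runtime and block size are arbitrary):

=== 8< ===
rados -p testpool bench 30 write -b 4096 -t 16 --no-cleanup
rados -p testpool bench 30 rand -t 16
rados -p testpool cleanup
=== 8< ===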

[ceph-users] Re: Proxmox+Ceph Benchmark 2020

2020-10-14 Thread Andreas John

[ceph-users] Re: How to reset an OSD

2021-01-13 Thread Andreas John
fannes, Fabian wrote: > failed: (22) Invalid argument
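
If the OSD really has to be reset completely, the usual route is to purge it and let ceph-volume re-create it from scratch. A sketch, with OSD id 12 and /dev/sdX as placeholders:

=== 8< ===
systemctl stop ceph-osd@12
ceph osd out 12
ceph osd purge 12 --yes-i-really-mean-it
ceph-volume lvm zap /dev/sdX --destroy
ceph-volume lvm create --data /dev/sdX
=== 8< ===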

[ceph-users] Re: 10G stackabe lacp switches

2021-02-16 Thread Andreas John

[ceph-users] Best practices for OSD on bcache

2021-03-01 Thread Andreas John
Does anyone have any best practices for it? Thanks.

[ceph-users] Re: Best practices for OSD on bcache

2021-03-02 Thread Andreas John
reasonably sized). I might be totally wrong, though. If you just do it because you don't want to re-create (or modify) the OSDs, it's not worth the effort IMHO. rgds, derjohn On 02.03.21 10:48, Norman.Kern wrote: > On 2021/3/2 5:09 AM, Andreas John wrote: >> Hello, >>
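
For anyone who does go the bcache route anyway, the setup itself is small. A sketch (device names are placeholders; the OSD is then created on top of the resulting bcache device):

=== 8< ===
# cache device (SSD/NVMe partition) plus backing device (HDD)
make-bcache -C /dev/nvme0n1p1 -B /dev/sdb
ceph-volume lvm create --data /dev/bcache0
=== 8< ===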

[ceph-users] Re: mon db growing. over 500Gb

2021-03-11 Thread Andreas John
>> 1 GB/10 s, so I shut them down again. >> Any idea what is going on? Or how can I shrink the DB back down?
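
If the mons are just carrying a bloated RocksDB, compaction can be triggered at runtime or on every mon start (a sketch; the mon id is a placeholder):

=== 8< ===
ceph tell mon.<id> compact

# or persistently in ceph.conf
[mon]
mon_compact_on_start = true
=== 8< ===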

[ceph-users] Re: Mount CEPH-FS on multiple hosts with concurrent access to the same data objects?

2020-09-22 Thread Andreas John
ph cluster? > Does Proxmox support snapshots, backups and thin provisioning with RBD VM images? > > Regards, > > Renne

[ceph-users] Re: Mount CEPH-FS on multiple hosts with concurrent access to the same data objects?

2020-09-22 Thread Andreas John
Hello, https://docs.ceph.com/en/latest/rados/operations/erasure-code/ but you could probably intervene manually if you want an erasure-coded pool. rgds, j. On 22.09.20 14:55, René Bartsch wrote: > On Tuesday, 22.09.2020 at 14:43 +0200, Andreas John wrote: >> Hello, >>
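
For the archives, a sketch of what such a manual setup could look like: an erasure-coded data pool with overwrites enabled, used either as an extra CephFS data pool or as an RBD data pool (pool names, PG counts and image names are placeholders):

=== 8< ===
ceph osd pool create ecpool 64 64 erasure
ceph osd pool set ecpool allow_ec_overwrites true
ceph fs add_data_pool cephfs ecpool                       # CephFS
rbd create --size 10G --data-pool ecpool rbd/vm-disk-1    # RBD
=== 8< ===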

[ceph-users] Re: Unknown PGs after osd move

2020-09-22 Thread Andreas John
Hello, On 22.09.20 20:45, Nico Schottelius wrote: > Hello, > > after having moved 4 ssds to another host (+ the ceph tell hanging issue > - see previous mail), we ran into 241 unknown pgs: You mean that you re-seated the OSDs into another chassis/host? Is the crush map aware of that? I didn'
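
A quick check whether the CRUSH map reflects the new placement (a sketch; osd.12 is a placeholder):

=== 8< ===
ceph osd tree       # the moved OSDs should sit under the new host bucket
ceph osd find 12    # shows the crush_location the cluster currently has
=== 8< ===

With the default osd_crush_update_on_start = true, an OSD restarted on the new host should update its CRUSH location on its own.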

[ceph-users] Re: Unknown PGs after osd move

2020-09-22 Thread Andreas John
Hey Nico, maybe you "pinned" the IP of the OSDs in question in ceph.conf to the IP of the old chassis? Good Luck, derjohn P.S. <100 MB/s is terrible performance for recovery with 85 OSDs. Is it rotational on a 1 GBit/s network? You could set ceph osd set nodeep-scrub to prevent too much
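
A sketch of the knobs mentioned above (the values are examples, not recommendations):

=== 8< ===
ceph osd set nodeep-scrub
ceph osd set noscrub
ceph tell osd.* injectargs '--osd-max-backfills 2 --osd-recovery-max-active 4'
=== 8< ===

Remember to unset the flags (ceph osd unset nodeep-scrub / noscrub) once recovery has finished.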

[ceph-users] Re: Unknown PGs after osd move

2020-09-22 Thread Andreas John
On 22.09.20 22:09, Nico Schottelius wrote: [...] > All nodes are connected with 2x 10 Gbit/s bonded/LACP, so I'd expect at > least a couple of hundred MB/s network bandwidth per OSD. > > On one server I just restarted the OSDs and now the read performance > dropped down to 1-4 MB/s per OSD with be
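
To rule out the disks themselves, a per-OSD benchmark is quick to run (a sketch; it writes 1 GiB by default, so avoid running it on a struggling cluster):

=== 8< ===
ceph tell osd.0 bench
=== 8< ===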

[ceph-users] Re: Remove separate WAL device from OSD

2020-09-22 Thread Andreas John
> it's not clear to me if this can only move a WAL device or if it can be > used to remove it ... > > Regards, > Michael
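
On Nautilus, ceph-bluestore-tool can migrate the BlueFS data off a separate WAL device back onto the main block device, which effectively removes the separate WAL. A sketch, assuming a default OSD path for osd.12 (a placeholder); test on a non-critical OSD first:

=== 8< ===
systemctl stop ceph-osd@12
ceph-bluestore-tool bluefs-bdev-migrate \
    --path /var/lib/ceph/osd/ceph-12 \
    --devs-source /var/lib/ceph/osd/ceph-12/block.wal \
    --dev-target /var/lib/ceph/osd/ceph-12/block
systemctl start ceph-osd@12
=== 8< ===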

[ceph-users] Getting rid of trim_object Snap .... not in clones

2020-01-31 Thread Andreas John
Hello, in my cluster one OSD after the other dies; I eventually recognized that it is simply an "abort" in the daemon, probably caused by: 2020-01-31 15:54:42.535930 7faf8f716700 -1 log_channel(cluster) log [ERR] : trim_object Snap 29c44 not in clones Close to this msg I get a stack trace:  ceph ver

[ceph-users] Getting rid of trim_object Snap .... not in clones

2020-02-01 Thread Andreas John
correctly that in PG 7.374 there is an object with rbd prefix 59cb9c679e2a9e3 that ends with ..3096 and has snap ID 29c44 ... ? What does the part A29AAB74__7 mean? I was not able to find in the docs how the directory / filename is structured. Best Regards, j. On 31.01.20 16:04, Andreas J

[ceph-users] Re: Getting rid of trim_object Snap .... not in clones

2020-02-01 Thread Andreas John
:20, Andreas John wrote: > Hello, > > for those stumbling upon a similar issue: I was able to mitigate the issue by setting > > === 8< === > > [osd.14] > osd_pg_max_concurrent_snap_trims = 0 > > === 8< === > > in ceph.conf. You don't need to re
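
The same setting can also be injected at runtime (a sketch for osd.14 as in the snippet above; this obviously only works while the OSD stays up long enough to receive it):

=== 8< ===
ceph tell osd.14 injectargs '--osd_pg_max_concurrent_snap_trims 0'
=== 8< ===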

[ceph-users] Re: Getting rid of trim_object Snap .... not in clones

2020-02-01 Thread Andreas John
Hello, answering myself in case someone else stumbles upon this thread in the future. I was able to remove the unexpected snap; here is the recipe: How to remove the unexpected snapshots: 1.) Stop the OSD: ceph-osd -i 14 --flush-journal  ...  flushed journal /var/lib/ceph/osd/ceph-14/journal fo
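
The low-level tool for this kind of surgery is ceph-objectstore-tool. A sketch of the sort of invocation involved, not necessarily the exact recipe from the full mail: the OSD path, PG id and rbd prefix are taken from earlier mails in this thread, the object JSON and clone id are placeholders, and the object JSON has to come from an --op list run first:

=== 8< ===
# with the OSD stopped
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-14 \
    --journal-path /var/lib/ceph/osd/ceph-14/journal \
    --pgid 7.374 --op list | grep 59cb9c679e2a9e3

ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-14 \
    --journal-path /var/lib/ceph/osd/ceph-14/journal \
    '<object-json-from-list>' remove-clone-metadata <cloneid>
=== 8< ===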

[ceph-users] Re: osd is immidietly down and uses CPU full.

2020-02-02 Thread Andreas John
>> OS: CentOS 7 >> Ceph: 10.2.5 >> >> Hi everyone, >> >> The cluster is used for VM image storage and object storage, >> and I have a bucket which has more than 20 million objects. >> >> Now I have a problem that the cluster blocks operations. >>