on db size
increased drastically.
We have 14.2.11, 10 OSD @ 2TB and cephfs in use.
Is this a known issue? Should we avoid noout?
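
For reference, by noout I mean the usual maintenance pattern, roughly:

ceph osd set noout      # before taking the OSDs down
# ... maintenance / reboot ...
ceph osd unset noout    # once the OSDs are back in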
TIA,
derjohn
so tried doing a 'ceph pg force-recovery' on the affected PGs, but only one
>>>> seems to have been tagged accordingly (see ceph -s output below).
>>>>
>>>> The guide also says "Sometimes it simply takes some time ..."
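
For anyone finding this later: forcing recovery on specific PGs looks roughly
like this (the PG ids below are placeholders):

ceph health detail                  # lists the affected PGs
ceph pg force-recovery 7.1a 7.2b    # placeholder PG ids
ceph pg force-backfill 7.3c         # for PGs that wait on backfill instead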
ng don't know why. The disk itself is capable of delivering well
>> above 50 KIOPS. The difference is an order of magnitude. Any info is more than welcome.
>> Daniel Mezentsev, founder
>> (+1) 604 313 8592.
>> Soleks Data Group.
>> Shaping the clouds.
> --
> Cheers,
> Alwin
fannes, Fabian wrote:
> failed: (22) Invalid argument
--
Andreas John
net-lab GmbH | Frankfurter Str. 99 | 63067 Offenbach
Geschaeftsfuehrer: Andreas John | AG Offenbach, HRB40832
Tel: +49 69 8570033-1 | Fax: -2 | http://www.net-lab.net
Facebook: https://www.facebook.com/netlabdotnet
--
Andreas John
net-lab GmbH | Frankfurter Str. 99 | 63067 Offenbach
Geschaeftsfuehrer: Andreas John | AG Offenbach, HR
Does anyone have any best practices for it? Thanks.
reasonably sized).
I might be totally wrong, though. If you are only doing it because you don't
want to re-create (or modify) the OSDs, it's not worth the effort IMHO.
rgds,
derjohn
On 02.03.21 10:48, Norman.Kern wrote:
> On 2021/3/2 5:09 AM, Andreas John wrote:
>> Hello,
>>
1Gb/10s so I shut them down again.
>>
>> Any idea what is going on? Or how can I shrink the db back down?
ph cluster?
> Does Proxmox support snapshots, backups and thin provisioning with
> RBD-VM images?
>
> Regards,
>
> Renne
Hello,
https://docs.ceph.com/en/latest/rados/operations/erasure-code/
but you could probably intervene manually if you want an erasure-coded
pool.
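
Roughly along these lines (the profile/pool names and k/m values here are only
an example, see the docs above for what fits your setup):

ceph osd erasure-code-profile set ec-example k=4 m=2 crush-failure-domain=host
ceph osd pool create ecpool 64 64 erasure ec-example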
rgds,
j.
On 22.09.20 14:55, René Bartsch wrote:
> On Tuesday, 22.09.2020 at 14:43 +0200, Andreas John wrote:
>> Hello,
>>
Hello,
On 22.09.20 20:45, Nico Schottelius wrote:
> Hello,
>
> after having moved 4 ssds to another host (+ the ceph tell hanging issue
> - see previous mail), we ran into 241 unknown pgs:
You mean that you re-seated the OSDs into another chassis/host? Is the
CRUSH map aware of that?
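
Something like this should show where the cluster currently places them
(the OSD id is made up):

ceph osd tree       # the re-seated OSDs should sit under the new host bucket
ceph osd find 12    # prints the address and crush location ceph has for osd.12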
I didn'
Hey Nico,
maybe you "pinned" the IP of the OSDs in question in ceph.conf to the IP
of the old chassis?
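
I mean an entry of this kind (section name and addresses are made up):

=== 8< ===
[osd.12]
public addr = 192.168.1.10      # still the old chassis' address?
cluster addr = 192.168.2.10
=== 8< ===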
Good Luck,
derjohn
P.S. < 100 MB/sec is terrible performance for recovery with 85 OSDs.
Is it rotational disks on a 1 GBit/sec network? You could set nodeep-scrub
to prevent too much deep scrubbing during the recovery.
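
I.e. something like:

ceph osd set nodeep-scrub     # optionally also: ceph osd set noscrub
# ... and once recovery has finished:
ceph osd unset nodeep-scrub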
On 22.09.20 22:09, Nico Schottelius wrote:
[...]
> All nodes are connected with 2x 10 Gbit/s bonded/LACP, so I'd expect at
> least a couple of hundred MB/s network bandwidth per OSD.
>
> On one server I just restarted the OSDs and now the read performance
> dropped down to 1-4 MB/s per OSD with be
it's
> not clear to me if this can only move a WAL device or if it can be
> used to remove it ...
>
> Regards,
> Michael
Hello,
in my cluster one OSD after the other died until I recognized that it
was simply an "abort" in the daemon, probably caused by
2020-01-31 15:54:42.535930 7faf8f716700 -1 log_channel(cluster) log
[ERR] : trim_object Snap 29c44 not in clones
Close to this msg I get a stacktrace:
ceph ver
correctly that in PG 7.374 there is an object with rbd prefix
59cb9c679e2a9e3 that ends with ..3096 and has snap ID
29c44 ... ? What does the part A29AAB74__7 mean?
I was not able to find in the docs how the directory / filename is structured.
Best Regards,
j.
On 31.01.20 16:04, Andreas J
:20, Andreas John wrote:
> Hello,
>
> for those stumbling upon a similar issue: I was able to mitigate the
> issue by setting
>
>
> === 8< ===
>
> [osd.14]
> osd_pg_max_concurrent_snap_trims = 0
>
> === 8< ===
>
>
> in ceph.conf. You don't need to re
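
The same option can presumably also be injected into the running OSD instead
of editing ceph.conf, e.g.:

ceph tell osd.14 injectargs '--osd_pg_max_concurrent_snap_trims=0'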
Hello,
answering myself in case someone else stumbles upon this thread in the
future. I was able to remove the unexpected snap; here is the recipe:
How to remove the unexpected snapshots:
1.) Stop the OSD
ceph-osd -i 14 --flush-journal
... flushed journal /var/lib/ceph/osd/ceph-14/journal fo
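
The removal itself is then typically done with ceph-objectstore-tool against
the stopped OSD, roughly along these lines (the object spec and clone id are
placeholders; the object spec comes from an '--op list' run):

ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-14 --op list | grep 3096
# then, with the JSON object spec from the list output:
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-14 \
    --journal-path /var/lib/ceph/osd/ceph-14/journal \
    '<object-json>' remove-clone-metadata <cloneid>

Afterwards start the OSD again and let it peer and scrub.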
>> OS: CentOS 7
>> Ceph: 10.2.5
>>
>> Hi, everyone
>>
>> The cluster is used for VM image storage and object storage.
>> And I have a bucket which has more than 20 million objects.
>>
>> Now I have a problem: the cluster blocks operations.
>>