On 30.04.18 09:26, Jan Marquardt wrote:
> On 27.04.18 20:48, David Turner wrote:
>> This old [1] blog post about removing super large RBDs is not relevant
>> if you're using object map on the RBDs, however its method to manually
>> delete an RBD is still valid.
I followed the instructions under
http://cephnotes.ksperis.com/blog/2014/07/04/remove-big-rbd-image until
'Remove all rbd data', which seems to be hanging, too.
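For reference, the 'Remove all rbd data' step from that post boils down to
something like this (only a sketch; the pool and the rbd_data prefix are
taken from the error output below, and listing a large pool this way can
take a long time):

# rados -p rbd ls | grep '^rbd_data.221bf2eb141f2.' | xargs -n 200 rados -p rbd rm

The 'No such file or directory' errors are expected for objects that were
never written; once the data objects are gone, 'rbd rm' on the image should
finish quickly.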
> On Thu, Apr 26, 2018 at 9:24 AM, Jan Marquardt wrote:
>> Hi,
>>
>> I am currently trying to delete an rbd image which is seemingly causing
>> our OSDs to crash, but it always gets stuck at 3%.
So far the only output has been:
error removing rbd>rbd_data.221bf2eb141f2.51d2: (2) No such
file or directory
error removing rbd>rbd_data.221bf2eb141f2.e3f2: (2) No such
file or directory
>
> [1] http://cephnotes.ksperis.com/blog/2014/07/04/remove-big-rbd-image
Hi,
I am currently trying to delete an rbd image which is seemingly causing
our OSDs to crash, but it always gets stuck at 3%.
root@ceph4:~# rbd rm noc_tobedeleted
Removing image: 3% complete...
Is there any way to force the deletion? Any other advice?
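One thing that might be worth ruling out before forcing anything is whether
some client still has the image open (a quick sketch, using the image name
from above):

# rbd status noc_tobedeleted

This lists any watchers on the image header.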
Best Regards
Jan
> snaps no head for
> 0:ba087b0f:::rbd_data.221bf2eb141f2.1436:46aa (have MIN)
>
> The cluster I've debugged with the same crash also had a lot of snapshot
> problems, including this one.
> In the end, only manually marking all snap_ids as deleted in the pool
>
On 10.04.18 20:22, Paul Emmerich wrote:
> Hi,
>
> I encountered the same crash a few months ago, see
> https://tracker.ceph.com/issues/23030
>
> Can you post the output of
>
> ceph osd pool ls detail -f json-pretty
>
>
> Paul
Yes, of course.
# ceph osd pool ls detail -f json-pretty
[
On 10.04.18 15:29, Brady Deetz wrote:
> What distribution and kernel are you running?
>
> I recently found my cluster running the 3.10 centos kernel when I
> thought it was running the elrepo kernel. After forcing it to boot
> correctly, my flapping osd issue went away.
We are running on Ubuntu.
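A quick way to double-check which kernel is actually running versus what
grub will boot next time (a generic sketch for Debian/Ubuntu):

# uname -r
# grep ^GRUB_DEFAULT /etc/default/grub
# dpkg -l 'linux-image-*' | grep ^ii

uname shows the running kernel; the grub default and the list of installed
linux-image packages show what the machine will come back with after a
reboot.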
Hi,
we are experiencing massive problems with our Ceph setup. After starting
a "repair pg" because of scrub errors, OSDs started to crash, and we have
not been able to stop this so far. We are running Ceph 12.2.4. The
crashing OSDs are both bluestore and filestore.
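To see what the OSDs are actually asserting on, the backtraces in the OSD
logs are usually the quickest lead (a sketch assuming the default log
location and systemd-managed OSDs; osd.12 is just an example id):

# grep -B2 -A20 'FAILED assert' /var/log/ceph/ceph-osd.*.log
# journalctl -u ceph-osd@12 --since today | tail -n 200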
Our cluster currently looks like this:
# ceph -s
Hi David,
On 15.03.18 18:03, David Turner wrote:
> I upgraded a cluster from Jewel 10.2.7 to Luminous 12.2.2 and last
> week I added 2 nodes to the cluster. The backfilling has been
> ATROCIOUS. I have OSDs consistently segfaulting during recovery.
> There's no pattern of which OSDs
On 05.03.18 13:13, Ronny Aasen wrote:
> i had some similar issues when i started my proof of concept. especially
> the snapshot deletion i remember well.
>
> the rule of thumb for filestore that i assume you are running is 1GB ram
> per TB of osd. so with 8 x 4TB osd's you are looking at 32GB of ram.
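A rough way to compare that rule of thumb with what the OSD daemons on a
node actually use (a generic sketch, assuming the OSDs run as ceph-osd
processes):

# ps -C ceph-osd -o pid=,rss=,cmd=
# free -h

The rss column is the resident memory per OSD in kilobytes; free -h shows
how much headroom the node has left overall.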
Hi,
we are relatively new to Ceph and are observing some issues, and I'd
like to know how likely they are to occur when operating a
Ceph cluster.
Currently our setup consists of three servers which are acting as
OSDs and MONs. Each server has two Intel Xeon L5420 CPUs (yes, I know,
it's not state of the art)
Hi,
sorry for the delay, but in the meantime we were able to find a
workaround. Inspired by this:
> Side note: Configuring the loopback IP on the physical interfaces is
> workable if you set it on **all** parallel links. Example with server1:
>
>
>
> iface enp3s0f0 inet static
>
> address
ble to use lo.
> Cheers,
>
> Maxime
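For reference, a minimal interfaces(5) sketch of that workaround, with a
hypothetical address and enp3s0f1 assumed as the second uplink (the quoted
example above is cut off):

auto enp3s0f0
iface enp3s0f0 inet static
    address 192.0.2.11
    netmask 255.255.255.255

auto enp3s0f1
iface enp3s0f1 inet static
    address 192.0.2.11
    netmask 255.255.255.255

The same /32 is configured on every parallel uplink, as described above.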
Regards,
Jan
>
>
> From: ceph-users on behalf of Richard Hesse
> Date: Monday 17 April 2017 22:12
> To: Jan Marquardt
> Cc: "ceph-users@lists.ceph.com"
> Subject: Re: [ceph-users] Ceph with Clos IP fabric
to use them
directly for Ceph. What would you suggest instead?
> 3) Are you planning on using RGW at all?
No, there won't be any RGW. It is a plain rbd cluster, which will be
used for backup purposes.
Best Regards
Jan
> On Thu, Apr 13, 2017 at 10:57 AM, Jan Marquardt <mailto:j...@ar
Hi,
I am currently working on Ceph with an underlying Clos IP fabric and I
am hitting some issues.
The setup looks as follows: There are 3 Ceph nodes which are running
OSDs and MONs. Each server has one /32 loopback ip, which it announces
via BGP to its uplink switches. Besides the loopback ip ea
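For context, announcing such a /32 loopback upstream can look roughly like
this in FRR with BGP unnumbered (only a sketch; ASN, address and the second
interface name are hypothetical, since the original mail does not show its
routing configuration):

router bgp 65101
 neighbor enp3s0f0 interface remote-as external
 neighbor enp3s0f1 interface remote-as external
 address-family ipv4 unicast
  network 192.0.2.11/32
 exit-address-family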