Hi,
I'm new to Ceph. I currently can't find an answer in the official docs
to the following question.
Can Ceph filesystems be used concurrently by clients, both when
accessing via RBD and CephFS? Concurrently means multiple clients
accessing and writing to the same Ceph volume (like it is p
Nice. Thanks a lot.
Regards from Berlin/Germany
Karsten
On 03/01/2013 11:13 PM, Gregory Farnum wrote:
> CephFS supports this very nicely, though it is of course not yet
> production ready for most users. RBD provides block device semantics —
> you can mount it from multiple hosts, but if you aren
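For illustration, a minimal sketch of what shared RBD use looks like in
practice - pool and image names here are made up, and the key point is
that a plain local filesystem (ext4/XFS) must never be mounted from two
hosts at once; a cluster filesystem such as OCFS2 or GFS2 is needed on
top of the shared block device:

   # create and map the image (hypothetical names)
   rbd create mypool/shared-img --size 102400
   rbd map mypool/shared-img       # shows up as e.g. /dev/rbd0 on each host
   # format ONCE, from a single host, with a cluster-aware filesystem
   mkfs.ocfs2 /dev/rbd0
   # then mount it on every client that needs concurrent access
   mount /dev/rbd0 /mnt/shared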
Hi.
I want to run my Ceph cluster in a 2-datacenter/room setup with pool
size/replica 3.
But I can't manage to define the ruleset correctly - or at least I am
unsure whether it is correct.
I have the following setup of my Ceph cluster:
> ID CLASS WEIGHT TYPE NAME STA
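For what it's worth, a sketch of a rule that is often used for such a
layout - it assumes the CRUSH tree has two buckets of type "datacenter"
(or "room") under the default root; the rule name and id are
placeholders:

   rule replicated_2dc {
           id 1
           type replicated
           min_size 2
           max_size 3
           step take default
           step choose firstn 2 type datacenter
           step chooseleaf firstn 2 type host
           step emit
   }

With size 3 this puts two copies into one datacenter and one into the
other; a pool is switched to the rule with
`ceph osd pool set <pool> crush_rule replicated_2dc`.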
Hi.
During the reorganization of my cluster I removed some OSDs. Obviously
something went wrong for 2 of them, osd.19 and osd.20.
If I fetch my current CRUSH map, then decompile and edit it, I see 2
orphaned/stale entries for the former OSDs:
> device 16 osd.16 class hdd
> device 17 osd.17 class hdd
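For reference, the decompile/edit/recompile cycle mentioned above looks
roughly like this (file names are placeholders):

   ceph osd getcrushmap -o crushmap.bin
   crushtool -d crushmap.bin -o crushmap.txt
   # edit crushmap.txt, e.g. drop the stale device lines, then:
   crushtool -c crushmap.txt -o crushmap.new
   ceph osd setcrushmap -i crushmap.new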
On 16.02.2018 21:56, David Turner wrote:
> What is the output of `ceph osd stat`? My guess is that they are still
> considered to be part of the cluster and going through the process of
> removing OSDs from your cluster is what you need to do. In particular
> `ceph osd rm 19`.
>
> On Fri, Feb 16, 2018 at 2:31 PM Karsten Becker
> <karsten.bec...@ecologic.eu> wrote:
>
>
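As a sketch, the usual complete removal sequence for a leftover OSD id
(here osd.19, repeat for osd.20) looks like this, assuming the daemon
itself is already gone:

   ceph osd out osd.19            # no-op if it is already out
   ceph osd crush remove osd.19   # drops it from the CRUSH map
   ceph auth del osd.19           # removes its cephx key
   ceph osd rm 19                 # finally removes the OSD id itself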
Hi,
I have one damaged PG in my cluster. All OSDs are BlueStore. How do I
fix this?
> 2018-02-19 11:00:23.183695 osd.29 [ERR] repair 10.7b9
> 10:9defb021:::rbd_data.2313975238e1f29.0002cbb5:head expected clone
> 10:9defb021:::rbd_data.2313975238e1f29.0002cbb5:64e 1 missing
> 201
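The standard first steps for an inconsistent PG would be something like
the following, with 10.7b9 taken from the log above - note that a
missing expected clone is snapshot metadata damage, which a plain repair
does not always resolve:

   ceph health detail                       # shows which PGs are inconsistent
   rados list-inconsistent-obj 10.7b9 --format=json-pretty
   ceph pg repair 10.7b9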
ng it with size = 2, do you?
>
>
> Quoting Karsten Becker:
>
>> Hi,
>>
>> I have one damaged PG in my cluster. All OSDs are BlueStore. How do I
>> fix this?
>>
>>> 2018-02-19 11:00:23.183695 osd.29 [ERR] repair 10.7b9
>>> 10:9defb021:::r
> 12: (_start()+0x2a) [0x55eef35e901a]
> Aborted
Best
Karsten
On 19.02.2018 17:09, Eugen Block wrote:
> Could [1] be of interest?
> Exporting the intact PG and importing it back to the respective OSD
> sounds promising.
>
> [1]
> http://lists.ceph.com/pipermail/ceph-user
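The export/import approach from that thread boils down to something
like this ceph-objectstore-tool sketch - OSD ids and paths are
placeholders, the OSDs involved must be stopped while the tool runs, and
the damaged copy of the PG has to be removed before the import:

   # on an OSD that still holds an intact copy of the PG
   systemctl stop ceph-osd@<id>
   ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-<id> \
       --pgid 10.7b9 --op export --file /tmp/10.7b9.export

   # on the OSD with the damaged copy (also stopped)
   ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-<id> \
       --pgid 10.7b9 --op import --file /tmp/10.7b9.export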
BTW - how can I find out which RBDs are affected by this problem? Maybe
a copy/remove of the affected RBDs could help? But how do I find out
which RBDs this PG belongs to?
Best
Karsten
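One way to answer that is to match the rbd_data prefix from the scrub
error, here rbd_data.2313975238e1f29, against the block_name_prefix of
every image - a rough sketch, assuming the images live in a pool named
"rbd":

   for img in $(rbd ls rbd); do
       echo -n "$img: "
       rbd info rbd/$img | grep block_name_prefix
   done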
On 19.02.2018 19:26, Karsten Becker wrote:
> Hi.
>
> Thank you for the tip. I just tri
> ceph1:~ # osdmaptool --test-map-object image1 --pool 5 /tmp/osdmap
> osdmaptool: osdmap file '/tmp/osdmap'
> object 'image1' -> 5.2 -> [0]
>
> ceph1:~ # osdmaptool --test-map-object image2 --pool 5 /tmp/osdmap
> osdmaptool: osdmap file '/tmp/osdmap'
> object
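For completeness, the osdmap used above is fetched from the cluster
first, and the same test can be run with the actual object name from the
scrub error (pool id 10, taken from the PG id 10.7b9; the object name is
copied as it appears in the log):

   ceph osd getmap -o /tmp/osdmap
   osdmaptool --test-map-object rbd_data.2313975238e1f29.0002cbb5 --pool 10 /tmp/osdmap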
What I do not understand: If I take your approach of finding out
what is stored in the PG, I get no match with my PG ID anymore.
If I take the approach of "rbd info" which was posted by Mykola Golub, I
get a match - unfortunately the most important VM on our system which
holds the software for our
> - rbd_data.966489238e1f29
> - rbd_data.e57feb238e1f29
> - rbd_data.4401c7238e1f29
>
> This doesn't make too much sense to me yet. Which ones belong to
> your corrupted VM? Do you have a backup of the VM in case the repair fails?
>
>
> Zitat v
So - here is the feedback. After a long night...
The plain copying did not help... it then complains about the Snaps of
another VM (also with old Snapshots).
I remembered a thread I read which said the problem could be solved by
converting back to FileStore, because you then have access to the data