[ceph-users] Concurrent access to Ceph filesystems

2013-03-01 Thread Karsten Becker
Hi, I'm new to Ceph. I currently find no answer in the official docs for the following question. Can Ceph filesystems be used concurrently by clients, both when accessing via RBD and CephFS? Concurrently means multiple clients accessing and writing to the same Ceph volume (like it is p

Re: [ceph-users] Concurrent access to Ceph filesystems

2013-03-01 Thread Karsten Becker
Nice. Thanks a lot. Regards from Berlin/Germany Karsten On 03/01/2013 11:13 PM, Gregory Farnum wrote: > CephFS supports this very nicely, though it is of course not yet > production ready for most users. RBD provides block device semantics — > you can mount it from multiple hosts, but if you aren
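
As a minimal sketch of those block device semantics (image and pool names here are hypothetical): the same RBD image can be mapped on several hosts, but concurrent writers are only safe with a cluster-aware filesystem such as OCFS2 or GFS2 on top of it.

    # map the same image on two hosts; the mapping itself is allowed
    hostA$ rbd map rbd/shared-img      # e.g. appears as /dev/rbd0
    hostB$ rbd map rbd/shared-img

    # safe only with a cluster filesystem (OCFS2/GFS2) on the device;
    # mounting ext4/XFS read-write from both hosts will corrupt it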

[ceph-users] Ceph Crush for 2 room setup

2018-02-16 Thread Karsten Becker
Hi. I want to run my Ceph cluster in a 2 datacenter/room setup with pool size/replica 3. But I can't manage to define the ruleset correctly - or at least I am unsure whether it is correct. I have the following setup of my Ceph cluster: > ID CLASS WEIGHT TYPE NAME STA
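
One common way to express a 2-room rule (a sketch only; the root and rule names are assumptions): choose two rooms first, then a host leaf in each room, so that with pool size 3 two copies land in one room and the third in the other.

    rule replicated_2rooms {
        id 1
        type replicated
        min_size 2
        max_size 4
        step take default
        step choose firstn 2 type room
        step chooseleaf firstn 2 type host
        step emit
    }

The rule would then be assigned to the pool with something like `ceph osd pool set <pool> crush_rule replicated_2rooms`.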

[ceph-users] Orphaned entries in Crush map

2018-02-16 Thread Karsten Becker
Hi. During the reorganization of my cluster I removed some OSDs. Obviously something went wrong for 2 of them, osd.19 and osd.20. If I get my current Crush map, decompile and edit it, I see 2 orphaned/stale entries for the former OSDs: > device 16 osd.16 class hdd > device 17 osd.17 class hdd
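
The usual round trip for hand-editing the map looks roughly like this (a sketch; file names are hypothetical), dropping the stale `device 19 osd.19` / `device 20 osd.20` lines in the text file:

    ceph osd getcrushmap -o crush.bin       # fetch the binary CRUSH map
    crushtool -d crush.bin -o crush.txt     # decompile to editable text
    # edit crush.txt and remove the stale device entries
    crushtool -c crush.txt -o crush.new     # recompile
    ceph osd setcrushmap -i crush.new       # inject the edited map

As the replies below suggest, the stale entries usually go away once the OSDs are fully removed from the cluster.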

Re: [ceph-users] Orphaned entries in Crush map

2018-02-16 Thread Karsten Becker
On 16.02.2018 21:56, David Turner wrote: > What is the output of `ceph osd stat`?  My guess is that they are still > considered to be part of the cluster and going through the process of > removing OSDs from your cluster is what you need to do.  In particular > `ceph osd
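
The full removal sequence being referred to is roughly the following (a hedged sketch, using osd.19 from the thread):

    ceph osd out 19               # mark it out, if not already
    ceph osd crush remove osd.19  # remove it from the CRUSH map
    ceph auth del osd.19          # delete its cephx key
    ceph osd rm 19                # remove it from the OSD map
    ceph osd stat                 # the reported OSD count should drop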

Re: [ceph-users] Orphaned entries in Crush map

2018-02-16 Thread Karsten Becker
> considered to be part of the cluster and going through the process of > removing OSDs from your cluster is what you need to do.  In particular > `ceph osd rm 19`. > > On Fri, Feb 16, 2018 at 2:31 PM Karsten Becker > mailto:karsten.bec...@ecologic.eu>> wrote: > >

[ceph-users] Missing clones

2018-02-19 Thread Karsten Becker
Hi, I have one damaged PG in my cluster. All OSDs are BlueStore. How do I fix this? > 2018-02-19 11:00:23.183695 osd.29 [ERR] repair 10.7b9 > 10:9defb021:::rbd_data.2313975238e1f29.0002cbb5:head expected clone > 10:9defb021:::rbd_data.2313975238e1f29.0002cbb5:64e 1 missing > 201
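
For a scrub error like this, the usual first steps (a sketch, using PG 10.7b9 from the log above) are to inspect the inconsistency and let the primary attempt a repair; as the follow-ups show, a missing clone may not be fixable this way:

    rados list-inconsistent-obj 10.7b9 --format=json-pretty   # details of what scrub found
    ceph pg repair 10.7b9                                     # ask the primary to repair
    ceph -w                                                   # watch for repair / ERR messages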

Re: [ceph-users] Missing clones

2018-02-19 Thread Karsten Becker
ng it with size = 2, do you? > > > Quoting Karsten Becker: > >> Hi, >> >> I have one damaged PG in my cluster. All OSDs are BlueStore. How do I >> fix this? >> >>> 2018-02-19 11:00:23.183695 osd.29 [ERR] repair 10.7b9 >>> 10:9defb021:::r

Re: [ceph-users] Missing clones

2018-02-19 Thread Karsten Becker
> 12: (_start()+0x2a) [0x55eef35e901a] > Aborted Best Karsten On 19.02.2018 17:09, Eugen Block wrote: > Could [1] be of interest? > Exporting the intact PG and importing it back to the respective OSD > sounds promising. > > [1] > http://lists.ceph.com/pipermail/ceph-user
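
The export/import approach from that thread, sketched with hypothetical OSD ids and paths (each OSD must be stopped while ceph-objectstore-tool touches it):

    # on an OSD that still holds an intact copy of the PG (osd.29 here):
    systemctl stop ceph-osd@29
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-29 \
        --pgid 10.7b9 --op export --file /root/10.7b9.export
    systemctl start ceph-osd@29

    # on the OSD with the broken copy (osd.30, hypothetical):
    systemctl stop ceph-osd@30
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-30 \
        --pgid 10.7b9 --op remove --force
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-30 \
        --pgid 10.7b9 --op import --file /root/10.7b9.export
    systemctl start ceph-osd@30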

Re: [ceph-users] Missing clones

2018-02-19 Thread Karsten Becker
BTW - how can I find out which RBDs are affected by this problem? Maybe a copy/remove of the affected RBDs could help? But how do I find out which RBDs this PG belongs to? Best Karsten On 19.02.2018 19:26, Karsten Becker wrote: > Hi. > > Thank you for the tip. I just tri
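
One way to see which images have data in the PG (a sketch; the OSD must be stopped first and the data path is an assumption) is to list its objects offline and collect the rbd_data prefixes:

    systemctl stop ceph-osd@29
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-29 \
        --pgid 10.7b9 --op list | grep -o 'rbd_data\.[0-9a-f]*' | sort -u
    systemctl start ceph-osd@29

Each prefix can then be matched against an image's block_name_prefix (see the later messages in this thread).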

Re: [ceph-users] Missing clones

2018-02-19 Thread Karsten Becker
osdmaptool --test-map-object image1 --pool 5 /tmp/osdmap > osdmaptool: osdmap file '/tmp/osdmap' >  object 'image1' -> 5.2 -> [0] > > ceph1:~ # osdmaptool --test-map-object image2 --pool 5 /tmp/osdmap > osdmaptool: osdmap file '/tmp/osdmap' >  object '
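
For reference, that test works against a dumped OSD map (pool id 5 and the object names are taken from the quote above):

    ceph osd getmap -o /tmp/osdmap                              # dump the current OSD map
    osdmaptool --test-map-object image1 --pool 5 /tmp/osdmap    # prints the PG and acting set for that object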

Re: [ceph-users] Missing clones

2018-02-20 Thread Karsten Becker
not understand: If I take your approach of finding out what is stored in the PG, I get no match with my PG ID anymore. If I take the approach of "rbd info" which was posted by Mykola Golub, I get a match - unfortunately the most important VM on our system which holds the software for our

Re: [ceph-users] Missing clones

2018-02-20 Thread Karsten Becker
>  - rbd_data.966489238e1f29 >  - rbd_data.e57feb238e1f29 >  - rbd_data.4401c7238e1f29 > > This doesn't make too much sense to me, yet. Which ones are belonging to > your corrupted VM? Do you have a backup of the VM in case the repair fails? > > > Zitat v
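
To map such prefixes back to image (and thus VM disk) names, the block_name_prefix reported by `rbd info` can be compared, e.g. (pool name 'rbd' is an assumption):

    for img in $(rbd -p rbd ls); do
        printf '%s %s\n' "$img" "$(rbd -p rbd info "$img" | awk '/block_name_prefix/ {print $2}')"
    done
    # prints lines like "vm-100-disk-1 rbd_data.966489238e1f29" to match against the list above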

Re: [ceph-users] Missing clones

2018-02-21 Thread Karsten Becker
So - here is the feedback. After a long night... The plain copying did not help... it then complains about the Snaps of another VM (also with old Snapshots). I remembered a thread I read saying that the problem could be solved by converting back to filestore, because you then have access to the data