Now I tried mounting cve-backup again. It mounted OK this time and I copied all the
data off it.
I can't keep using Ceph in production now :(
It takes a lot of experience with Ceph to quickly find where an error is and
repair it fast.
I will try to keep using it for data without critical availability (for example
Which kernel version are you using on the client?
What is the status of the pgs?
# uname -a
# ceph pg stat
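For reference, the two checks above (plus `ceph -s`, a commonly used overall status command) can be run together on the client; they assume a configured Ceph client and a reachable cluster:

```shell
uname -r      # kernel version on the client (rbd is a kernel module here)
ceph pg stat  # one-line placement-group summary (active+clean vs. degraded)
ceph -s       # overall cluster health: monitors, osds, pg states
```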
Laurent
On 18/09/2013 17:45, Timofey wrote:
yes, format 1:
rbd info cve-backup | grep format
format: 1
No, nothing about this image:
dmesg | grep rbd
[ 294.355188] rbd: loaded rbd (rados block device)
[
What is returned by rbd info?
Do you see your image in the rbd_directory object?
(replace rbd with the correct pool):
# rados get -p rbd rbd_directory - | strings
Do you have an object called oldname.rbd or newname.rbd ?
# rados get -p rbd oldname.rbd - | strings
# rados get -p rbd newname.rbd - | strings
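For context: with format 1, each image is described by a header object named <image>.rbd, and the rbd_directory object maps image names to those headers. A minimal sketch of checking both sides after a rename (the pool and image names below are placeholders, and a live cluster is required):

```shell
POOL=rbd          # replace with the real pool
OLD=oldname       # image name before the rename
NEW=newname       # image name after the rename

# Does the header object exist under either name?
rados -p "$POOL" stat "${OLD}.rbd"
rados -p "$POOL" stat "${NEW}.rbd"

# Which names does the directory object know about?
rados get -p "$POOL" rbd_directory - | strings
```

If the header exists under one name while rbd_directory lists the other, the rename left the two out of sync.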
I use format 1.
Yes, I see the images, but I can't map them.
> Hello Timofey,
>
> Do you still see your images with "rbd ls"?
> Which format (1 or 2) do you use?
>
>
> Laurent Barbe
>
>
> On 18/09/2013 08:54, Timofey wrote:
>> I renamed a few images while the cluster was in a degraded state. Now I can't map
Hello Timofey,
Do you still see your images with "rbd ls"?
Which format (1 or 2) do you use?
Laurent Barbe
On 18/09/2013 08:54, Timofey wrote:
I renamed a few images while the cluster was in a degraded state. Now I can't map one
of them; I get the error:
rbd: add failed: (6) No such device or address
I
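For anyone hitting the same symptom: error (6) is ENXIO from the kernel rbd client. A hedged diagnosis sketch, assuming the pool is rbd and using the image name from this thread:

```shell
# Try the map and capture the kernel's view of the failure.
rbd map rbd/cve-backup     # fails with: rbd: add failed: (6) No such device or address
dmesg | grep rbd | tail    # look for lookup errors for the image's header object
# With format 1 the kernel resolves the image via its <name>.rbd header
# object, so a rename done while pgs were degraded may have left the
# header object and the rbd_directory entry out of sync.
```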