Jason, Ilya, Brad, David, George,

In closing, I would like to thank everyone who contributed their
knowledge to my problem, although the final decision was not to attempt
any sort of recovery, since the effort required would have been
tremendous, with ambiguous results to say the least.
On Wed, Aug 10, 2016 at 10:55 AM, Ilya Dryomov wrote:
> I think Jason meant to write "rbd_id." here.
Whoops -- thanks for the typo correction.
--
Jason
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-user
On Mon, Aug 8, 2016 at 11:47 PM, Jason Dillaman wrote:
> On Mon, Aug 8, 2016 at 5:39 PM, Jason Dillaman wrote:
>> Unfortunately, for v2 RBD images, this image name to image id mapping
>> is stored in the LevelDB database within the OSDs and I don't know,
>> offhand, how to attempt to recover deleted values from there.
The image's associated metadata is removed from the directory once the
image is removed. Also, the default librbd log level will not log an
image's internal id. Therefore, unfortunately, the only way to
proceed is how I previously described.
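For context, while a v2 image still exists, the name-to-id mapping Jason
describes lives in the omap of the "rbd_directory" object. A sketch,
assuming a pool named "rbd" (hypothetical) and a running cluster:

```shell
# List the name<->id mapping kept in the rbd_directory object's omap.
# librbd stores "name_<image>" and "id_<image id>" keys here for v2 images.
rados -p rbd listomapvals rbd_directory
```

Once the image is deleted those omap keys are removed along with it,
which is why the id cannot be looked up after the fact.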
On Wed, Aug 10, 2016 at 2:48 AM, Brad Hubbard wrote:
On Wed, Aug 10, 2016 at 3:16 PM, Georgios Dimitrakakis
wrote:
>
> Hello!
>
> Brad,
>
> Is that possible from the default logging, or is verbose logging needed?
>
> I've managed to get the UUID of the deleted volume from OpenStack but don't
> really know how to get the offsets and OSD maps since "rbd info"
> doesn't provide any information for that volume.
Hello!
Brad,
Is that possible from the default logging, or is verbose logging needed?
I've managed to get the UUID of the deleted volume from OpenStack but
don't really know how to get the offsets and OSD maps since "rbd info"
doesn't provide any information for that volume.
Is it possible to
On Tue, Aug 9, 2016 at 7:39 AM, George Mihaiescu wrote:
> Look in the cinder db, the volumes table, to find the UUID of the deleted
> volume.
You could also look through the logs at the time of the delete, and I
suspect you should be able to see how the rbd image was prefixed/named
at the time of deletion.
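Brad's log suggestion might look like the sketch below; the UUID value
and log paths are placeholders, and it assumes a cinder-managed volume,
which RBD names "volume-<UUID>":

```shell
# Placeholder UUID and log locations -- substitute your own.
uuid="00000000-0000-0000-0000-000000000000"
grep -r "volume-${uuid}" /var/log/ceph/ /var/log/cinder/
```

If client logging was verbose enough at delete time, the matching lines
may reveal the image's internal id (its object prefix).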
On Mon, Aug 8, 2016 at 9:39 PM, Georgios Dimitrakakis
wrote:
> Dear David (and all),
>
> the data are considered very critical, hence all this effort to
> recover them.
>
> Although the cluster hasn't been fully stopped, all user actions have. I
> mean services are running but users are not able to read/write/delete.
On Mon, Aug 8, 2016 at 5:39 PM, Jason Dillaman wrote:
> Unfortunately, for v2 RBD images, this image name to image id mapping
> is stored in the LevelDB database within the OSDs and I don't know,
> offhand, how to attempt to recover deleted values from there.
Actually, to correct myself, the "rbd
All RBD images use a backing RADOS object to facilitate mapping
between the external image name and the internal image id. For v1
images this object would be named ".rbd" and for v2 images
this object would be named "rbd_id.". You would need to
find this deleted object first in order to start fig
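A quick way to check whether the header object still exists, assuming a
pool named "rbd" and an image named "myimage" (both hypothetical):

```shell
# Each command fails with "No such file or directory" (ENOENT)
# if the object has already been deleted.
rados -p rbd stat myimage.rbd       # v1: "<image name>.rbd"
rados -p rbd stat rbd_id.myimage    # v2: "rbd_id.<image name>"
```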
Look in the cinder db, the volumes table, to find the UUID of the deleted
volume.
If you go through yours OSDs and look for the directories for PG index 20, you
might find some fragments from the deleted volume, but it's a long shot...
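George's two suggestions might be sketched as follows; the database
credentials are placeholders, the pool id 20 is taken from his message,
and the second step assumes FileStore OSDs (where each PG is a directory
under current/ and underscores in object names are escaped as "\u" on
disk):

```shell
# 1. Find the UUID of the deleted volume in the cinder database
#    (standard cinder schema; soft-deleted rows have deleted=1).
mysql -u cinder -p cinder -e \
  "SELECT id, display_name, size, deleted_at
     FROM volumes WHERE deleted = 1 ORDER BY deleted_at DESC;"

# 2. Long shot: search the PG directories of pool 20 on each OSD host
#    for leftover fragments of the image's data objects.
find /var/lib/ceph/osd/ceph-*/current/20.*_head -type f -name '*rbd*data*'
```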
> On Aug 8, 2016, at 4:39 PM, Georgios Dimitrakakis
> wrote:
Dear David (and all),
the data are considered very critical, hence all this effort to
recover them.
Although the cluster hasn't been fully stopped, all user actions have.
I mean services are running but users are not able to read/write/delete.
The deleted image was the exact same size o
I don't think there's a way of getting the prefix from the cluster at this
point.
If the deleted image was a similar size to the example you've given, you
will likely have had objects on every OSD. If this data is absolutely
critical you need to stop your cluster immediately or make copies of all
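For reference, a v2 (format 2) image's data objects are named
"rbd_data.<internal id>.<16 hex digit object number>", one per
object-size chunk (4 MiB by default); v1 images use a different
"rb.0.*" prefix. A small sketch that enumerates the names for a
made-up id:

```shell
# Print the RADOS data-object names for a v2 RBD image, given its
# internal id, image size in MiB and object size in MiB (default 4).
rbd_data_objects() {
    local id=$1 size_mb=$2 object_mb=${3:-4}
    local count=$(( (size_mb + object_mb - 1) / object_mb ))
    local i
    for i in $(seq 0 $(( count - 1 ))); do
        printf 'rbd_data.%s.%016x\n' "$id" "$i"
    done
}

rbd_data_objects 86fc4a9a4e06 10    # hypothetical id, 10 MiB image
```

A 10 MiB image at 4 MiB objects yields three names, numbered
0000000000000000 through 0000000000000002.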
Hi,
On 08.08.2016 10:50, Georgios Dimitrakakis wrote:
Hi,
On 08.08.2016 09:58, Georgios Dimitrakakis wrote:
Dear all,
I would like your help with an emergency issue but first let me
describe our environment.
Our environment consists of 2 OSD nodes with 10x 2TB HDDs each and
3 MON nodes (2 of them are the OSD nodes as well) all with ceph version
That will be down to the pool the RBD was in; the CRUSH rule for that pool
will dictate which OSDs store objects. In a standard config that RBD will
likely have objects on every OSD in your cluster.
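David's point can be checked per object with "ceph osd map", which
reports the PG and acting OSDs for any object name; the pool and object
name below are hypothetical:

```shell
# Show which PG and OSDs a given (hypothetical) data object maps to.
ceph osd map rbd rbd_data.86fc4a9a4e06.0000000000000000
```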
On 8 Aug 2016 9:51 a.m., "Georgios Dimitrakakis"
wrote:
> Hi,
>>
>> On 08.08.2016 09:58, Georgios Dimitrakakis wrote:
Hi,
On 08.08.2016 09:58, Georgios Dimitrakakis wrote:
Dear all,
I would like your help with an emergency issue but first let me
describe our environment.
Our environment consists of 2 OSD nodes with 10x 2TB HDDs each and
3 MON nodes (2 of them are the OSD nodes as well) all with ceph version
Dear all,
I would like your help with an emergency issue but first let me
describe our environment.
Our environment consists of 2 OSD nodes with 10x 2TB HDDs each and 3 MON
nodes (2 of them are the OSD nodes as well) all with ceph version 0.80.9
(b5a67f0e1d15385bc0d60a6da6e7fc810bde6047)
Thi