Hi,
ceph-osd fails with:
# ceph-osd -i 0 --mkfs --mkkey --osd-journal /dev/sde1
SG_IO: bad/missing sense data, sb[]: 70 00 05 00 00 00 00 0d 00 00 00 00 20 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
2016-06-11 15:37:50.776385 7f64c
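For what it's worth, those sense bytes decode as sense key 0x05
(ILLEGAL REQUEST) with ASC/ASCQ 0x20/0x00 (INVALID COMMAND OPERATION
CODE), i.e. the drive or controller refused a SCSI command ceph-osd
probed it with; that line by itself is often just a rejected probe. To
rule the journal device in or out, a rough sketch (same device names as
in the command above):

# parted /dev/sde print
# dd if=/dev/sde1 of=/dev/null bs=1M count=16 iflag=direct

If both succeed, retrying mkfs with a file-backed journal shows whether
/dev/sde1 is actually the problem:

# ceph-osd -i 0 --mkfs --mkkey --osd-journal /var/lib/ceph/osd/ceph-0/journal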
Hi,
We experienced a similar error: when writing to an RBD block device
with multiple threads using fio, some OSDs hit errors and went down.
Are we talking about the same issue?
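A load of that shape can be generated against a kernel-mapped image
with fio, e.g. (a sketch; the pool and image names are made up, and the
image is assumed to map to /dev/rbd0):

# rbd create --size 10240 rbd/fiotest
# rbd map rbd/fiotest
# fio --name=rbdwrite --filename=/dev/rbd0 --ioengine=libaio --direct=1 \
      --rw=randwrite --bs=4k --numjobs=8 --iodepth=32 --runtime=300 \
      --time_based --group_reporting

If the OSD logs show the same SG_IO sense data while this runs, it is
probably the same issue.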
2016-06-11 0:37 GMT+08:00 Юрий Соколов:
> Good day, all.
>
> I found this issue: https://github.com/ceph/ceph/pull/5991
>
> Did this i
On Tuesday, June 7, 2016, Christian Balzer wrote:
>
> Hello,
>
> you will want to read:
>
> https://www.sebastien-han.fr/blog/2014/08/25/ceph-mix-sata-and-ssd-within-the-same-box/
>
> especially sections III and IV.
>
> Another approach w/o editing the CRUSH map is here:
> https://elkano.org/blog/
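>
> FWIW, the CRUSH-map approach in that post amounts to giving the SSD
> OSDs their own root plus a rule that targets it, and it can be done
> from the CLI without hand-editing the decompiled map. A sketch with
> invented bucket, OSD, and pool names (check the rule id with
> ceph osd crush rule dump before the last step):
>
> # ceph osd crush add-bucket ssd root
> # ceph osd crush add-bucket node1-ssd host
> # ceph osd crush move node1-ssd root=ssd
> # ceph osd crush set osd.12 1.0 root=ssd host=node1-ssd
> # ceph osd crush rule create-simple ssd-rule ssd host
> # ceph osd pool set ssd-pool crush_ruleset 1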
On Sunday, June 5, 2016, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:
> Let's assume that everything went very, very bad and I have to manually
> recover a cluster with an unconfigured Ceph.
>
> 1. How can I recover data directly from raw disks? Is this possible?
There have b
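For reference, one commonly used path is ceph-objectstore-tool, which
can list and export placement groups from an intact filestore disk
while the OSD is stopped. A sketch, with example paths and an example
PG id:

# ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 \
      --journal-path /var/lib/ceph/osd/ceph-0/journal --op list-pgs
# ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 \
      --journal-path /var/lib/ceph/osd/ceph-0/journal \
      --op export --pgid 1.2f --file /backup/1.2f.export

An export made this way can later be injected into a fresh OSD with
--op import.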