On 20/01/2019 05.50, Brian Topping wrote:
> My main constraint is that I had four disks on a single machine to start
> with, and any one of the disks should be able to fail without affecting
> the machine's ability to boot, the bad disk should be replaceable without
> requiring obscure admin skills, and the fin
> On Jan 18, 2019, at 10:58 AM, Hector Martin wrote:
>
> Just to add a related experience: you still need 1.0 metadata (that's
> the 1.x variant stored at the end of the partition, like 0.90) for an
> mdadm-backed EFI system partition if you boot using UEFI. This generally
> works well, except on some
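For reference, a minimal sketch of the 1.0-metadata approach described above. Device names are placeholders for two ESP-type partitions on different disks; the metadata version is the point here, since 1.0 keeps the md superblock at the end of the device where UEFI firmware never sees it:

```shell
# Mirror the EFI system partition with 1.0 metadata. Because the superblock
# sits at the END of each member, firmware that knows nothing about md still
# sees a plain FAT filesystem on each disk and can boot from either one.
# /dev/sda1 and /dev/sdb1 are placeholder ESP partitions.
mdadm --create /dev/md/esp \
      --level=1 --raid-devices=2 --metadata=1.0 \
      /dev/sda1 /dev/sdb1

# Format the array as FAT32 and mount it where the bootloader expects it.
mkfs.vfat -F32 /dev/md/esp
mount /dev/md/esp /boot/efi
```

The trade-off is that anything writing to the ESP outside of Linux (the firmware itself, or another OS) bypasses md and can desynchronize the mirror.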
On 12/01/2019 15:07, Brian Topping wrote:
> I’m a little nervous that BlueStore assumes it owns the partition table and
> will not be happy that a couple of primary partitions have been used. Will
> this be a problem?
You should look into using ceph-volume in LVM mode. This will allow you
to creat
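As a sketch of the ceph-volume LVM mode mentioned above (device and VG/LV names are placeholders):

```shell
# Let ceph-volume manage LVM itself: it creates a volume group and logical
# volume on the given device and provisions a BlueStore OSD on top, so
# BlueStore works against an LV rather than raw partitions.
# /dev/sdb is a placeholder for an unused disk.
ceph-volume lvm create --bluestore --data /dev/sdb

# Alternatively, point it at a pre-made logical volume, which is useful
# when the disk already carries other partitions (names are placeholders):
ceph-volume lvm create --bluestore --data ceph-vg/osd-lv
```

The second form is the relevant one here: you carve out your boot/RAID partitions first, put the remainder in an LVM volume group, and hand ceph-volume an LV, so Ceph never touches the partition table at all.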
If you have the chance, maybe the best choice is to try booting the OS from
the network. I mean you don't need an extra disk for the OS. Actually, I'm
trying to make a squashfs image which is booted over the LAN via iPXE. This
is a very good example: https://croit.io/features/efficiency-diskless
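A minimal iPXE script for this kind of setup might look like the sketch below. The URLs, kernel name, and boot parameters are placeholders; the exact command line depends on the initramfs in use (e.g. Debian live-boot fetches the squashfs named by `fetch=`):

```shell
#!ipxe
# Sketch: boot a kernel + initramfs over HTTP, with the initramfs then
# fetching a squashfs root image over the LAN. All URLs are placeholders.
dhcp
kernel http://boot.example.com/vmlinuz boot=live fetch=http://boot.example.com/root.squashfs
initrd http://boot.example.com/initrd.img
boot
```

The appeal for an OSD node is that the OS lives entirely in RAM, so every local disk is free for Ceph and a dead OS "disk" is a non-event.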