> "Andrei Mikhailovsky"
> Cc: "ceph-users"
> Sent: Thursday, 26 May, 2016 11:29:22
> Subject: Re: [ceph-users] Jewel ubuntu release is half cooked
> Hi Andrei,
> Can you share your udev hack that you had to use?
> Currently, I add "/usr/sbin/ceph-disk activate-all" to /etc/rc.local to
> activate all OSDs at boot.
Hi Andrei,
Can you share your udev hack that you had to use?
Currently, I add "/usr/sbin/ceph-disk activate-all" to /etc/rc.local to
activate all OSDs at boot. After the first reboot after upgrading to jewel, the
journal disks are owned by ceph:ceph. Also, links are created in
/etc/systemd/sys
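
For reference, the rc.local workaround mentioned above would look roughly like
this (a sketch, assuming a stock Ubuntu-style /etc/rc.local and the ceph-disk
shipped with Jewel; adjust to your setup):

#!/bin/sh -e
#
# /etc/rc.local - executed once at the end of multi-user boot.
# Workaround for OSDs not coming up via udev after the Jewel upgrade:
# scan all prepared Ceph data partitions and activate (start) their OSDs.
/usr/sbin/ceph-disk activate-all

exit 0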
Hi Anthony,
>
>> 2. Inefficient chown documentation - The documentation states that one should
>> "chown -R ceph:ceph /var/lib/ceph" if one is looking to have ceph-osd run as
>> user ceph and not as root. Now, this command would run a chown process one
>> osd at a time. I am considering my cluster to be a
Hi,
On Mon, May 23, 2016 at 8:24 PM, Anthony D'Atri wrote:
>
>
> Re:
>
>> 2. Inefficient chown documentation - The documentation states that one
>> should "chown -R ceph:ceph /var/lib/ceph" if one is looking to have ceph-osd
>> run as user ceph and not as root. Now, this command would run a chown
Re:
> 2. Inefficient chown documentation - The documentation states that one should
> "chown -R ceph:ceph /var/lib/ceph" if one is looking to have ceph-osd ran as
> user ceph and not as root. Now, this command would run a chown process one
> osd at a time. I am considering my cluster to be a
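
A common way to make that faster is to run the recursive chown per OSD in
parallel rather than letting a single process walk the whole tree. A rough
sketch (assuming the default cluster name "ceph", the standard /var/lib/ceph
layout, GNU xargs, and that the daemons are stopped first; tune -P to the box):

# Chown the top-level directories first (non-recursive), then recurse into
# each OSD data directory in parallel, one chown process per OSD.
chown ceph:ceph /var/lib/ceph /var/lib/ceph/osd
ls -d /var/lib/ceph/osd/ceph-* | xargs -n1 -P8 chown -R ceph:ceph
# Repeat for the other /var/lib/ceph subdirectories present on the node
# (mon, bootstrap-*, ...), which are small enough for a plain chown -R.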
Hello!
On Mon, May 23, 2016 at 11:26:38AM +0100, andrei wrote:
> 1. Ceph journals - After performing the upgrade the ceph-osd processes are
> not starting. I've followed the instructions and chowned /var/lib/ceph (also
> see point 2 below). The issue relates to the journal partitions, which ar
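
For what it's worth, the usual symptom here is that the journal partitions are
presumably still owned by root after the reboot, so ceph-osd running as the
ceph user cannot open them. One form such a udev hack can take (not
necessarily the one Andrei used; the rules file name and the device names
below are placeholders for your own journal partitions) is a local rule that
forces the ownership at boot:

# /etc/udev/rules.d/90-local-ceph-journal.rules  (hypothetical file name)
# Hand the journal partitions to ceph:ceph so ceph-osd, running as the
# unprivileged ceph user, can open them.
KERNEL=="sdb1", OWNER="ceph", GROUP="ceph", MODE="0660"
KERNEL=="sdb2", OWNER="ceph", GROUP="ceph", MODE="0660"

Re-tagging the partitions with the proper Ceph journal GPT type code (via
sgdisk --typecode) so the packaged ceph-disk udev rules handle them is the
cleaner long-term fix, but the rule above is the quick hack.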
Hello
I've recently updated my Hammer ceph cluster running on Ubuntu 14.04 LTS
servers and noticed a few issues during the upgrade. Just wanted to share my
experience.
I've installed the latest Jewel release. In my opinion, some of the issues I
came across relate to poor upgrade documentation
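
For anyone following the same Hammer to Jewel path: if I recall the Jewel
release notes correctly, there is also a ceph.conf option that lets the
daemons keep running as root over data that is still owned by root, so the
cluster can be upgraded first and the OSD stores chowned one at a time
afterwards. Roughly (treat this as a sketch and check the release notes for
the exact wording):

[osd]
# The daemon runs as whichever user owns this path, so OSDs whose data
# directory is still root-owned keep running as root until it is chowned
# to ceph:ceph and the daemon is restarted.
setuser match path = /var/lib/ceph/$type/$cluster-$id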