Thanks for reporting this issue. It has been fixed by
https://github.com/ceph/ceph/pull/40845 and will be released in the
next pacific point release.
Neha
On Mon, Apr 19, 2021 at 8:19 AM Behzad Khoshbakhti wrote:
Thanks; by commenting out the ProtectClock directive, the issue is resolved.
Thanks for the support.
On Sun, Apr 18, 2021 at 9:28 AM Lomayani S. Laizer wrote:
Hello,
Commenting out ProtectClock=true in /lib/systemd/system/ceph-osd@.service should fix the issue.
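A drop-in override is one way to apply that change without editing the packaged unit in /lib/systemd/system, which the next package upgrade can overwrite. The following is only a sketch of the generic systemd mechanism, not a Ceph-documented procedure; osd.2 is borrowed from later messages in this thread:

sudo systemctl edit ceph-osd@.service
# in the editor, add a drop-in that overrides the hardening directive:
#   [Service]
#   ProtectClock=false
sudo systemctl daemon-reload                # systemctl edit normally reloads for you; harmless to repeat
sudo systemctl restart ceph-osd@2.service   # substitute your OSD id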
On Thu, Apr 8, 2021 at 9:49 AM Behzad Khoshbakhti wrote:
I believe there is some problem in systemd, as the Ceph OSD starts successfully when run manually using the ceph-osd command.
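One way to see which sandboxing restrictions systemd applies to the unit, and which a manual run of ceph-osd is not subject to, is to query the relevant unit properties. A rough check, assuming osd.2 as the instance:

systemctl show ceph-osd@2.service -p ProtectClock -p PrivateDevices -p CapabilityBoundingSet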
On Thu, Apr 8, 2021, 10:32 AM Enrico Kern wrote:
I agree. But why does the process start manually without systemd, which obviously has nothing to do with uid/gid 167? It is also not really a fix to make all users change uids/gids...
On Wed, Apr 7, 2021 at 7:39 PM Wladimir Mutel wrote:
Could there be a smoother migration? On my Ubuntu I have the same behavior, and my ceph uid/gid are also 64045.
I started with Luminous in 2018, when it was not containerized, and I still continue updating it with apt.
Since when have we had this hardcoded value of 167?
running as ceph user and not root.
Following is the startup configuration, which can also be found at https://paste.ubuntu.com/p/2kV8KhrRfV/:
[Unit]
Description=Ceph object storage daemon osd.%i
PartOf=ceph-osd.target
After=network-online.target local-fs.target time-sync.target
Before=remote-fs-pre.
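The unit file above is cut off by the archive; the complete unit, together with any drop-in overrides that modify it, can be dumped on the affected host. This is a generic systemd check, not specific to this thread:

systemctl cat ceph-osd@.service
# narrow it down to hardening-related directives, if any:
systemctl cat ceph-osd@.service | grep -E 'Protect|Private|Capability|Device'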
Following is the error:
Apr 4 19:09:39 osd03 systemd[1]: ceph-osd@2.service: Scheduled restart
job, restart counter is at 3.
Apr 4 19:09:39 osd03 systemd[1]: Stopped Ceph object storage daemon osd.2.
Apr 4 19:09:39 osd03 systemd[1]: Starting Ceph object storage daemon
osd.2...
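The "Scheduled restart" lines only show that the unit keeps failing; the actual error is usually printed a few lines earlier by the daemon itself. One way to pull the full failure output for this OSD (assuming instance 2, as in the log above):

journalctl -b -u ceph-osd@2.service --no-pager | tail -n 50
# or follow it live while restarting the unit:
journalctl -fu ceph-osd@2.service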
And after a reboot what errors are you getting?
On 4 Apr 2021, at 15:33, Behzad Khoshbakhti wrote:
I have changed the uid and gid to 167, but still no progress.
cat /etc/group | grep -i ceph
ceph:x:167:
root@osd03:~# cat /etc/passwd | grep -i ceph
ceph:x:167:167:Ceph storage service:/var/lib/ceph:/usr/sbin/nologin
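If the uid/gid were switched after the OSDs were created, files under /var/lib/ceph may still carry the old numeric owner (64045 in earlier messages), which would keep producing permission errors even with passwd and group looking right. A quick check, offered only as a suggestion:

# list anything not owned by the current ceph user or group
find /var/lib/ceph -not -user ceph -o -not -group ceph | head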
On Sun, Apr 4, 2021 at 6:47 PM Andrew Walker-Brown <andrew_jbr...@hotmail.com> wrote:
UID and GID should both be 167, I believe.
Make a note of the current values and change them to 167 using usermod and groupmod.
I had just this issue. It’s partly to do with how perms are used within the containers, I think.
I changed the values to 167 in passwd and everything worked again.
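A rough outline of that change, assuming the OSD services are stopped first and the data lives under the default /var/lib/ceph; this is a sketch to adapt, not an official procedure:

sudo systemctl stop ceph-osd.target
id ceph                                    # note the old uid/gid
sudo usermod -u 167 ceph
sudo groupmod -g 167 ceph
# re-own anything still referencing the old numeric ids
sudo chown -R ceph:ceph /var/lib/ceph /var/log/ceph
sudo systemctl start ceph-osd.target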
Hello,
Permissions are correct. uid/gid is 64045/64045.
ls -alh
total 32K
drwxrwxrwt 2 ceph ceph 200 Apr 4 14:11 .
drwxr-xr-x 8 ceph ceph 4.0K Sep 18 2018 ..
lrwxrwxrwx 1 ceph ceph 93 Apr 4 14:11 block -> /dev/...
-rw--- 1 ceph ceph 37 Apr 4 14:11 ceph_fsid
-rw--- 1 ceph ceph 37
Are the file permissions correct, and are the UID/GID in passwd both 167?
On 4 Apr 2021, at 12:29, Lomayani S. Laizer wrote:
Hello,
+1, I am facing the same problem in Ubuntu after the upgrade to Pacific.
2021-04-03T10:36:07.698+0300 7f9b8d075f00 -1 bluestore(/var/lib/ceph/osd/ceph-29/block) _read_bdev_label failed to open /var/lib/ceph/osd/ceph-29/block: (1) Operation not permitted
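For what it's worth, EPERM on opening the block symlink can come either from the ownership of the underlying device or from the unit's sandboxing: as far as I understand systemd.exec(5), ProtectClock=true implicitly adds a DeviceAllow rule, which turns on device-access filtering and can block the OSD's block device even when ownership is fine. Two quick checks on the affected host (osd.29 as in the log; adapt as needed):

# show the symlink and the owner of the device it points at (normally ceph:ceph)
ls -l /var/lib/ceph/osd/ceph-29/block
ls -lL /var/lib/ceph/osd/ceph-29/block
# show whether device filtering is in effect for the unit
systemctl show ceph-osd@29.service -p DevicePolicy -p DeviceAllow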
It is worth mentioning that when I issue the following command, the Ceph OSD starts and joins the cluster:
/usr/bin/ceph-osd -f --cluster ceph --id 2 --setuser ceph --setgroup ceph
On Sun, Apr 4, 2021 at 3:00 PM Behzad Khoshbakhti wrote:
> Hi all,
>
> As I have upgraded my Ceph cluster from 15.2.10 to 16