> On 20 Dec 2022 at 08:39, akshay sharma wrote:
>
> ceph fs authorize cephfs client.user /
> sudo mount -t ceph vm:6789,vm2:6789:/ /mnt/cephfs -o name=user,secret=***
>
> Now I'm able to copy files on the same machine, i.e. copy a file
> from home to /mnt/cephfs
On 20.12.22 08:38, akshay sharma wrote:
Copying files on the same machine (basically copying a file from home to
/mnt/cephfs) is working, but copying from a remote machine to /mnt/cephfs
using SFTP or SCP is not working.
What account are you using when copying files locally?
On 20.12.22 10:21, akshay sharma wrote:
By "account" do you mean the user?
If yes, then we are using a different user; the MDS auth is created with
client.user.
This is the cephx key that is only used when mounting the filesystem.
While copying we are logged in as user "test", but locally with
use
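For readers hitting the same wall: the cephx name used at mount time and the Unix account doing the copy are separate layers. A minimal sketch, assuming the names from this thread (client.user, local account "test", mount at /mnt/cephfs):

```shell
# cephx caps (client.user) only govern what the mount itself may do:
ceph auth get client.user

# POSIX ownership and mode on the mounted tree govern what a local
# login or an SFTP/SCP session (e.g. user "test") may write:
ls -ld /mnt/cephfs
```

If /mnt/cephfs is owned by root:root, an SFTP login as "test" will get permission denied regardless of the cephx caps.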
Hi guys,
I stumbled upon these log entries on my active MDS in a Pacific (16.2.10)
cluster:
2022-12-20T10:06:52.124+0100 7f11ab408700 0 log_channel(cluster) log [WRN] :
client.1207771517 isn't responding to mclientcaps(revoke), ino 0x10017e84452
pending pAsLsXsFsc issued pAsLsXsFsc, sent 62.
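A sketch of how one might identify the client behind such a warning; the MDS name placeholder is an assumption, and the client id 1207771517 is taken from the log line above:

```shell
# Show the warning with surrounding health context:
ceph health detail

# List the MDS sessions and look for the client id from the log
# (1207771517); the output includes hostname and mount point:
ceph tell mds.<name> session ls
```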
Hi Dan,
thank you for the info. Is there anything I can do to help resolve the
issue? A reliable rctime would be really great for backup purposes (even if
it would need fixing the timestamps of the files causing the future rctime).
Regards
Felix
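For reference, the rctime Felix refers to is exposed as a CephFS virtual extended attribute; a sketch (the directory path is an example):

```shell
# Recursive ctime of a directory tree, as used for incremental backups:
getfattr -n ceph.dir.rctime --only-values /mnt/cephfs/somedir
```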
---
On 20/12/2022 18:34, Stolte, Felix wrote:
Hi guys,
I stumbled upon these log entries on my active MDS in a Pacific (16.2.10)
cluster:
2022-12-20T10:06:52.124+0100 7f11ab408700 0 log_channel(cluster) log [WRN] :
client.1207771517 isn't responding to mclientcaps(revoke), ino 0x10017e84452
Hi,
I can't really confirm your observation. I have a test cluster
(running on openSUSE Leap) that was upgraded from Nautilus to Quincy
(17.2.3) a few weeks ago, and this worked fine:
nautilus:~ # ceph auth get-or-create client.cinder mgr 'profile rbd'
mon 'profile rbd' osd 'profile rbd pool=cinder'
nautilus:~
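One way to double-check what the cluster actually stored for the new key (a sketch reusing the same client name):

```shell
# Print the key and the mon/mgr/osd caps recorded for client.cinder:
ceph auth get client.cinder
```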
Got it Robert.
I tried changing the permissions of the mount point after mounting with
cephfs, but it is not allowed; the ownership is being changed to root:root.
The mount path /mnt/mycephfs is test:test before mounting (we also tried
nobody:nogroup), but after mounting (mount -t) the permissions are getting
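A sketch of the usual workaround, assuming the names from this thread (vm/vm2 monitors, client.user, local user test): a freshly created CephFS root is owned by root:root, so change the ownership once, as root, after mounting; the change is stored in the filesystem itself and survives remounts.

```shell
# Mount, then chown as root; requires that client.user's caps do not
# restrict uid/gid (hostnames and secret are placeholders):
sudo mount -t ceph vm:6789,vm2:6789:/ /mnt/mycephfs -o name=user,secret=***
sudo chown test:test /mnt/mycephfs
ls -ld /mnt/mycephfs
```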
Hello.
I have a few issues with my ceph cluster:
- RGWs have disappeared from management (console does not register any
RGWs) despite showing 4 services deployed and processes running;
- All object buckets not accessible / manageable;
- Console showing some of my pools are “updating” – its
thanks Yuri, rgw approved based on today's results from
https://pulpito.ceph.com/yuriw-2022-12-20_15:27:49-rgw-pacific_16.2.11_RC2-distro-default-smithi/
On Mon, Dec 19, 2022 at 12:08 PM Yuri Weinstein wrote:
> If you look at the pacific 16.2.8 QE validation history (
> https://tracker.ceph.com/
On Tue, Dec 20, 2022 at 11:41 AM Casey Bodley wrote:
> thanks Yuri, rgw approved based on today's results from
>
> https://pulpito.ceph.com/yuriw-2022-12-20_15:27:49-rgw-pacific_16.2.11_RC2-distro-default-smithi/
>
> On Mon, Dec 19, 2022 at 12:08 PM Yuri Weinstein
> wrote:
>
> > If you look at t
Hi all,
From what I understand, after creating an OSD using "ceph-volume lvm create",
we then run "ceph-volume lvm activate" so that the systemd unit is created.
However, I found that after rebooting a host, some OSDs in the host will have
empty /var/lib/ceph/osd/ceph-$osd
And I am not able to r
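A sketch of re-activating such OSDs after a reboot: "ceph-volume lvm activate --all" rescans the LVM tags and repopulates the tmpfs-backed OSD directories.

```shell
# Recreate /var/lib/ceph/osd/ceph-$osd for every OSD found in LVM tags:
ceph-volume lvm activate --all

# Verify the directories are populated again:
ls /var/lib/ceph/osd/
```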