Got it, Robert.
I tried changing the permissions of the mount point after mounting with
CephFS, but it does not stick: the ownership gets changed to root:root.
The mount path /mnt/mycephfs is owned by test:test before mounting (we also
tried nobody:nogroup), but after mounting (mount -t) the ownership is
getting reset to root:root.
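For what it's worth, the ownership seen at /mnt/mycephfs after mounting comes from the CephFS root inode held by the MDS, not from the local directory, so a chown only sticks if it is done after the mount by a client whose caps allow it. A minimal sketch, reusing the monitor addresses and names from this thread and assuming an admin identity with rw caps on /:

  sudo mount -t ceph vm:6789,vm2:6789:/ /mnt/mycephfs -o name=admin,secret=***
  sudo chown test:test /mnt/mycephfs   # changes the CephFS root inode, persists across remounts
  ls -ld /mnt/mycephfs                 # should now show test:test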
On 20.12.22 10:21, akshay sharma wrote:
By account, do you mean the user?
If yes, then we are using a different user; the MDS auth is created with
client.user.
This is the cephx key that is only used when mounting the filesystem.
>
While copying we are logged in as user 'test', but locally with
use
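As an aside: the cephx identity given with name= only controls what the client may do against the cluster; the local Unix account doing the copy is what the normal POSIX permission check applies to. A quick sketch for checking both sides ('test' is simply the account mentioned above):

  sudo ceph auth get client.user   # mon/mds/osd caps granted to this identity
  id test                          # uid/gid the local copy actually runs as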
On 20.12.22 08:38, akshay sharma wrote:
Now I'm able to copy files from the same machine; basically, copying a file
from home to /mnt/cephfs is working, but copying from a remote machine
using SFTP or SCP to /mnt/cephfs is not working.
What account are you using when locally copying files?
What
> On 20 Dec 2022, at 08:39, akshay sharma wrote the
> following:
>
> ceph fs authorize cephfs client.user /
> sudo mount -t ceph vm:6789,vm2:6789:/ /mnt/cephfs -o name=user,secret=***
>
> Now I'm able to copy files from the same machine; basically, copying a file
> from home to /mnt/cephfs
ceph fs authorize cephfs client.user /
sudo mount -t ceph vm:6789,vm2:6789:/ /mnt/cephfs -o name=user,secret=***
Now I'm able to copy files from the same machine; basically, copying a file
from home to /mnt/cephfs is working, but copying from a remote machine
using SFTP or SCP to /mnt/cephfs is not working.
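One way to narrow this down, sketched under the assumption that the SFTP/SCP session lands as the 'test' account used above: check who owns the CephFS root and try a write as that account directly on the server.

  ls -ld /mnt/cephfs                    # owner/mode of the CephFS root
  sudo -u test touch /mnt/cephfs/probe  # fails the same way if it is a plain permission problem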
On 19/12/2022 21:19, akshay sharma wrote:
Hi All,
I have three virtual machines, each with a dedicated disk for Ceph, and the
cluster is up as shown below:
user@ubuntu:~/ceph-deploy$ sudo ceph status
  cluster:
    id:     06a014a8-d166-4add-a21d-24ed52dce5c0
    health: HEALTH_WARN
On 19.12.22 at 14:19, akshay sharma wrote:
sudo ceph auth get-or-create client.user mon 'allow r' mds 'allow r, allow
rw path=/home/cephfs' osd 'allow rw pool=cephfs_data' -o
/etc/ceph/ceph.client.user.keyring
The path for this command is relative to the root of the CephFS, usually
just /.
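So a corrected form could look like the sketch below; the 'rw' capability and the '/shared' subdirectory are illustrative assumptions, not taken from the original commands.

  ceph fs authorize cephfs client.user / rw        # rw on the whole file system
  ceph fs authorize cephfs client.user /shared rw  # or restrict to an existing subdirectory inside CephFS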
Hi Ramana and thank you,
yes, before the MDS host reboot the filesystem was read+write and the
cluster was fine too. We haven't made any upgrades since the cluster
was installed.
Some time ago I had to rebuild 6 OSDs due to a start failure at boot
time; no more trouble since.
On Fri, Nov 4, 2022 at 9:36 AM Galzin Rémi wrote:
>
>
> Hi,
> I'm looking for some help/ideas/advice in order to solve the problem
> that occurs on my metadata server after the server reboot.
You rebooted an MDS host and your file system became read-only? Was
the Ceph cluster healthy before the reboot?
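Two quick checks that usually show this, assuming a reasonably current release:

  ceph health detail   # an MDS that dropped to read-only is reported here
  ceph fs status       # shows the active rank and the state of each MDS daemon

The MDS log on the rebooted host should also record why it went read-only (typically a failed write to the metadata pool).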
Nevermind, it works now. Thanks for the help.
With that first command, I get this error:
Error EINVAL: pool 'cephfs_metadata' already contains some objects. Use an
empty pool instead.
What can I do?
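For reference, 'ceph fs new' can be forced to accept a non-empty pool; whether that is safe here depends on the recovery procedure being followed, so treat this as a sketch only (the data pool name is assumed):

  ceph fs new cephfs cephfs_metadata cephfs_data --force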
Alright, I didn't realize that the MDS was affected by this as well.
In that case there's probably no other way than running the 'ceph fs
new ...' command as Yan, Zheng suggested.
Do you have backups of your cephfs contents in case that goes wrong?
I'm not sure if a pool copy would help in any
Both the MDS maps and the keyrings are lost as a side effect of the monitor
recovery process I mentioned in my initial email, detailed here:
https://docs.ceph.com/docs/mimic/rados/troubleshooting/troubleshooting-mon/#monitor-store-failures
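A sketch of recreating an MDS key after the auth database has been rebuilt; the daemon name 'mds.omicron-m1' and the keyring path are assumptions, and the caps follow the standard manual-deployment pattern:

  ceph auth get-or-create mds.omicron-m1 \
      mon 'profile mds' mgr 'profile mds' mds 'allow *' osd 'allow *' \
      -o /var/lib/ceph/mds/ceph-omicron-m1/keyring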
On Mon, 31 Aug 2020 at 21:10, Eugen Block wrote:
> I do
I don't understand: what happened to the previous MDS? If there are
CephFS pools, there also was an old MDS, right? Can you explain that,
please?
Quoting cyclic3@gmail.com:
I added an MDS, but there was no change in either output (apart from
recognising the existence of an MDS)
This sounds rather risky; will this definitely not lose any of my data?
I added an MDS, but there was no change in either output (apart from
recognising the existence of an MDS)
On Sun, Aug 30, 2020 at 8:05 PM wrote:
>
> Hi,
> I've had a complete monitor failure, which I have recovered from with the
> steps here:
> https://docs.ceph.com/docs/mimic/rados/troubleshooting/troubleshooting-mon/#monitor-store-failures
> The data and metadata pools are there and are completely
There’s no MDS running, can you start it?
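Assuming a package-based deployment managed by systemd (not containerized), starting it would look roughly like this on the MDS host:

  sudo systemctl start ceph-mds@$(hostname -s)
  ceph fs status   # the daemon should show up as active or standby shortly after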
Quoting cyclic3@gmail.com:
My ceph -s output is this:
  cluster:
    id:     bfe08dcf-aabd-4cac-ac4f-9e56af3df11b
    health: HEALTH_ERR
            1/3 mons down, quorum omicron-m1,omicron-m2
            6 scrub errors
            Possible data damage: 1 pg inconsistent
My ceph -s output is this:
  cluster:
    id:     bfe08dcf-aabd-4cac-ac4f-9e56af3df11b
    health: HEALTH_ERR
            1/3 mons down, quorum omicron-m1,omicron-m2
            6 scrub errors
            Possible data damage: 1 pg inconsistent
            Degraded data redundancy: 626702/20558920
Hi,
how exactly does Ceph report that there's no CephFS? If your MONs were
down and you recovered them, is at least one MGR also up and running?
Can you share 'ceph -s' and 'ceph fs status'?
Quoting cyclic3@gmail.com:
Hi,
I've had a complete monitor failure, which I have recovered from