Hi all,
I cannot get my luminous 12.2.11 mds servers to start on Debian 9(.8)
unless nscd is also installed.
Trying to start from command line:
# /usr/bin/ceph-mds -f --cluster ceph --id mds02.hep.wisc.edu --setuser ceph --setgroup ceph
unable to look up group 'ceph': (34) Numerical result out of range
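The error is errno 34 (ERANGE) from the group lookup, i.e. NSS resolution of 'ceph' fails before ceph-mds can drop privileges — consistent with installing nscd papering over it. A quick, hedged way to exercise the same lookup path outside of ceph-mds:

```shell
# Check whether the 'ceph' user and group resolve through NSS --
# the same lookups ceph-mds performs for --setuser/--setgroup.
getent group ceph || echo "group 'ceph' does not resolve via NSS"
getent passwd ceph || echo "user 'ceph' does not resolve via NSS"
```

If getent fails while the entries are present in /etc/group and /etc/passwd, the nsswitch.conf configuration (the layer nscd sits in front of) is the place to look.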
's refusing to backfill or just hang backfilled yet?
>
>
> Recovery on EC pools requires min_size rather than k shards at this
> time. There were reasons; they weren't great. We're trying to get a fix
> tested and merged at https://github.com/ceph/ceph/pull/1761
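To see how the quoted constraint applies to a given pool, you can read back k, m, and min_size. A sketch — 'ecpool' and the profile name are placeholders for your own:

```shell
# Which erasure-code profile does the pool use, and what are k/m?
ceph osd pool get ecpool erasure_code_profile
ceph osd erasure-code-profile get myprofile
# How many shards must be available for PGs to serve and recover?
ceph osd pool get ecpool min_size
```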
Hi all,
Recently our cluster lost a drive and a node (3 drives) at the same
time. Our erasure-coded pools are all k=2 m=2, so if everything is
working correctly no data is lost.
However, there were 4 PGs that stayed "incomplete" until I finally
took the suggestion in 'ceph health detail' to reduce
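The health-detail suggestion is cut off above; assuming it was the usual one for this situation — reducing min_size (an assumption, not quoted): with k=2 m=2 each PG has 4 shards, any 2 of which can reconstruct the data, but per the quoted reply recovery will not proceed while fewer than min_size (default k+1 = 3) shards are up. A hedged sketch, pool name a placeholder:

```shell
# Temporarily allow the incomplete PGs to recover from k=2 shards,
# then restore min_size once the cluster is healthy again.
ceph osd pool set ecpool min_size 2
# ...wait for the incomplete PGs to recover...
ceph osd pool set ecpool min_size 3
```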
Hi all,
I am exporting cephfs via samba. It is much slower through samba than
when accessed directly. Does anyone know how to speed it up?
I benchmarked with bonnie++ (5 runs each), either writing directly to
cephfs mounted with the kernel (v4.18.6) module:
bonnie++ -> kcephfs
or through a cifs kernel-module mount (protocol ve
P.S. kernel 4.18.6
# uname -a
Linux tardis 4.18.0-1-amd64 #1 SMP Debian 4.18.6-1 (2018-09-06) x86_64
GNU/Linux
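One option often raised for this kind of slowdown (an assumption here, not something confirmed in the thread) is samba's vfs_ceph module, which has smbd talk to the cluster through libcephfs instead of re-reading a kernel mount. A sketch of a share definition — share name, paths, and cephx user are hypothetical:

```
[cephfs]
    path = /
    vfs objects = ceph
    ceph:config_file = /etc/ceph/ceph.conf
    ceph:user_id = samba
    read only = no
```

Whether this beats re-exporting the kernel mount depends on the workload; it mainly avoids traversing the kernel cephfs mount a second time.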
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
Hi all,
It appears that the '+' which indicates an extended ACL is not
shown by 'ls' when cephfs is mounted with the kernel client.
# ls -al
total 9
drwxrwxr-x+ 4 root smbadmin 4096 Aug 13 10:14 .
drwxrwxr-x  5 root smbadmin 4096 Aug 17 09:37 ..
dr-xr-xr-x 4 root root 3 Sep 11 09:50