Hello Ceph user list!
I tried to update Ceph 15.2.10 to 16.2.0 via ceph orch. In the
beginning everything seemed to work fine and the new MGRs and MONs were
deployed. But now I have ended up in a pulling loop and I am unable to fix
the issue by myself.
#ceph -W cephadm --watch-debu
2021-04-02T10:36:
Hello Sage,
You're welcome, and I'm happy to help. Thanks for your effort to make Ceph even better.
Alex
On 03.04.2021 15:21, Sage Weil wrote:
> I have a proposed fix for this here: https://github.com/ceph/ceph/pull/40577
> Unfortunately, this won't help you until it is tested and merged and
> included in 16.2
Hello Konstantin,
In my experience the CephFS kernel driver (Ubuntu 20.04) was always
faster and the CPU load was much lower compared to vfs_ceph.
Alex
On Wednesday, 14.04.2021 at 10:19 +0300, Konstantin Shalygin wrote:
> Hi,
>
> Actually vfs_ceph should perform better, but this method wil
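The two access paths compared above can be sketched side by side. This is a minimal illustration, not a configuration from the thread: the monitor address, CephX user name, secret file, and share name are all placeholders.

```shell
# CephFS kernel-client mount: Samba then exports the mounted
# directory as an ordinary path, bypassing vfs_ceph entirely.
# Monitor address, user name and secret file are placeholders.
sudo mount -t ceph 192.168.1.10:6789:/ /mnt/cephfs \
    -o name=samba,secretfile=/etc/ceph/samba.secret

# The vfs_ceph alternative instead talks to CephFS through
# libcephfs from inside smbd, configured per share in smb.conf:
#   [share]
#       path = /
#       vfs objects = ceph
#       ceph:config_file = /etc/ceph/ceph.conf
```

With the kernel mount, caching and readahead happen in the kernel page cache, which is consistent with the lower CPU load observed above.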
Hello Ceph list!
Is there a known problem with NFSv4 ACLs and CephFS? After I set the
permissions from a Windows Samba client,
everything seems to be fine. But if I try:
$ getfattr -n security.NTACL -d /xyz
/xyz: security.NTACL: No such attribute
CephFS (Ceph 16.2.5) is mounted via the Ub
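When a named attribute query fails like this, it can help to enumerate all extended attributes on the path to see what Samba actually stored. A minimal sketch using Python's os.listxattr (Linux-only); the temp file here merely stands in for the /xyz path from the message:

```python
import os
import tempfile

def list_xattrs(path: str) -> list[str]:
    """Return the names of all extended attributes on *path*."""
    return os.listxattr(path)

# A temp file stands in for the CephFS path (/xyz) so the sketch
# is self-contained; on the real system you would pass the Samba
# share directory instead.
with tempfile.NamedTemporaryFile() as f:
    # Prints the names of any attributes present (may be empty,
    # or contain e.g. security.selinux depending on the system).
    print(list_xattrs(f.name))
```

Note that smbd only writes security.NTACL when an ACL-storing VFS module (e.g. acl_xattr) is active for the share, so an empty result does not by itself indicate a CephFS problem.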
Hello Harry!
Is the workaround still working for you? So far I haven't found a permanent
fix. After a few days the "deployment"
starts again.
Best,
Alex
On Sunday, 11.07.2021 at 19:58 +, Robert W. Eckert wrote:
> I had the same issue for Prometheus and Grafana, the same work
Hello list!
We have a containerized Pacific (16.2.5) cluster running CentOS 8.4, and after a
few weeks the OSDs start to use swap
quite a lot despite free memory. The host has 196 GB of memory and 24 OSDs.
"osd_memory_target" is set to 6 GB.
$ cat /proc/meminfo
MemTotal: 196426616 kB
M
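As a sanity check on the figures above (24 OSDs, a 6 GB osd_memory_target, MemTotal of 196426616 kB), the aggregate OSD memory budget still leaves substantial headroom, which is why swapping despite free memory is surprising. A quick sketch:

```python
# Figures taken from the message above.
osd_count = 24
osd_memory_target_gib = 6          # per-OSD memory target
mem_total_kib = 196_426_616        # MemTotal from /proc/meminfo

total_target_gib = osd_count * osd_memory_target_gib
mem_total_gib = mem_total_kib / (1024 * 1024)

print(f"OSD memory budget: {total_target_gib} GiB")                       # 144 GiB
print(f"Host memory:       {mem_total_gib:.1f} GiB")                      # ~187.3 GiB
print(f"Headroom:          {mem_total_gib - total_target_gib:.1f} GiB")   # ~43.3 GiB
```

Keep in mind that osd_memory_target is a best-effort target, not a hard cap, so individual OSDs can overshoot it under load.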
> On 08/16 13:14, Alexander Sporleder wrote:
> > Hello list!
> > We have a containerized Pacific (16.2.5) Cluster running CentOS 8.4 and
> > after a few weeks the OSDs start to use swap
>
> > > ; but
> > > you might want to try to tweak the default behavior by lowering the
> > > 'vm.swappiness' of the kernel:
> > >
> > > https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/performance_tuning_guide/s-memory-tunables
> >
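The tuning referred to in the quoted thread can be sketched like this; the value 10 is purely illustrative, not a recommendation from the thread:

```shell
# Read the current swappiness (the kernel default is usually 60).
cat /proc/sys/vm/swappiness

# Lower it at runtime; smaller values make the kernel less eager
# to swap out anonymous pages while page-cache memory is available.
sudo sysctl -w vm.swappiness=10

# Persist the setting across reboots.
echo 'vm.swappiness = 10' | sudo tee /etc/sysctl.d/90-swappiness.conf
```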
Hello list,
Is the "EC CLAY code plugin" considered production-ready in Pacific, or is
it more of a technology preview?
Best,
Alex
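For context, the CLAY plugin is selected through an erasure-code profile; a minimal sketch, with the profile name, pool name, and k/m/d values chosen purely for illustration:

```shell
# Create an erasure-code profile using the CLAY plugin.
# k = data chunks, m = coding chunks, d = number of OSDs
# contacted during repair (k < d <= k + m - 1).
ceph osd erasure-code-profile set clay42 \
    plugin=clay k=4 m=2 d=5

# Create an erasure-coded pool that uses the profile.
ceph osd pool create clay-pool 64 erasure clay42
```

CLAY's selling point is reduced network traffic during recovery (controlled by d), which is independent of the production-readiness question above.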
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
Hello Frank,
I had similar problems.
https://www.mail-archive.com/ceph-users@ceph.io/msg11772.html
I disabled swap and now everything is fine.
Best,
Alex
On Tuesday, 01.03.2022 at 08:28 +, Frank Schilder wrote:
> Dear all,
>
> there is a new development on this old case. After e
Hello Kotresh,
We have had the same problem quite frequently for a few months now with Ceph 16.2.7.
For us the only thing that helps is a
reboot of the MDS/client, or the warning might disappear after a few days by
itself. It's an Ubuntu kernel (5.13) client.
Best,
Alex
On Wednesday, 06.07.