On Thu, May 9, 2019 at 3:21 AM Stolte, Felix wrote:
Thanks for the info, Patrick. We are using Ceph packages from the Ubuntu main
repo, so it will take some weeks until I can do the update. In the meantime, is
there anything I can do manually to decrease the number of caps held by the
backup nodes, like flushing the client cache or something like that?
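(For context, one commonly suggested workaround for a kernel CephFS client holding too many caps is to drop the client's cached dentries and inodes, which makes it release unused caps back to the MDS; the effect can then be checked from the MDS side via the admin socket. This is a sketch, not advice from this thread, and the daemon name "mds.a" is an assumption to adjust for your cluster:)

```shell
# On each backup node (kernel CephFS client), as root:
# flush dirty data, then drop cached dentries and inodes so the
# client releases unused caps back to the MDS. Run this only when
# no backup is in progress.
sync
echo 2 > /proc/sys/vm/drop_caches

# On the MDS host: list client sessions and inspect how many caps
# each one holds ("mds.a" is a placeholder daemon name).
ceph daemon mds.a session ls | grep '"num_caps"'
```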
On Wed, May 8, 2019 at 4:10 AM Stolte, Felix wrote:
Hi Paul,
we are using Kernel 4.15.0-47.
Regards
Felix
IT-Services
Phone: 02461 61-9243
E-Mail: f.sto...@fz-juelich.de
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
Which kernel are you using on the clients?
Paul
--
Paul Emmerich
Looking for help with your Ceph cluster? Contact us at https://croit.io
croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90
On Wed, May 8, 2019 at 1:10 PM Stolte, Felix wrote:
Hi folks,
we are running a Luminous cluster and use CephFS for file services. We use
Tivoli Storage Manager to back up all data in the Ceph filesystem to tape for
disaster recovery. Backup runs on two dedicated servers, which mount the
cephfs via kernel mount. In order to complete the Ba