This is *probably* the NVMe version of the 883, which performs very well.
Paul
--
Paul Emmerich
Looking for help with your Ceph cluster? Contact us at https://croit.io
croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90
On Sat, Mar 30, 2019 at 7:55 AM Fabian Figuered
The only important thing is to enable discard/trim on the file system.
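For example (a minimal sketch; the device, mount point, and filesystem below are placeholders, not from this thread), discard can be enabled either per-mount in fstab or as a periodic batched trim, which is often gentler on the drive:

  # /etc/fstab: continuous discard on every delete (placeholder device/mount)
  /dev/nvme0n1p1  /var/lib/ceph  xfs  defaults,discard  0 0

  # or periodic batched trim via the systemd timer
  systemctl enable --now fstrim.timer
  fstrim -v /var/lib/ceph    # one-off manual trim of the same mount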
Paul
On Fri, Mar 29, 2019 at 4:42 PM Marc Roos wrote:
Just a 'this worked' message. I have not seen the broken behaviour after my
upgrade last Monday. Thanks,
--
Mark Schouten
Tuxis, Ede, https://www.tuxis.nl
T: +31 318 200208
- Original Message -
From: Mark Schouten (m...@tuxis.nl)
Date: 25-03-2019 12:51
To: Yan, Zheng (uker...
> Hello,
>
> I wanted to know if there are any maximum limits on
>
> - Max number of Ceph data nodes
> - Max number of OSDs per data node
> - Global max on number of OSDs
> - Any limitations on the size of each drive managed by an OSD?
> - Any limitation on number of client nodes?
> - Any limitatio
Team,
Is there a way to force-backfill a PG in Ceph Jewel? I know this is
available in Mimic. Is it available in Jewel?
I tried ceph pg force-backfill and ceph pg backfill, but no luck.
Any help would be appreciated, as we have a prod issue.
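For reference, a minimal sketch of the Mimic-era syntax this refers to (the PG id 2.7f is a placeholder, not from this thread):

  ceph pg force-backfill 2.7f          # queue PG 2.7f for backfill ahead of others
  ceph pg cancel-force-backfill 2.7f   # undo the priority bump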
in.linkedin.com/in/nikhilravindra