Re: [ceph-users] Samsung 983 NVMe M.2 - experiences?

2019-03-30 Thread Paul Emmerich
This is *probably* the NVMe version of the 883, which performs very well.
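
It is still worth verifying sync-write behaviour yourself before committing to
10 units. A minimal fio sanity check, just a sketch (it assumes the drive is
empty and enumerates as /dev/nvme0n1, and it will overwrite whatever is on it):

  fio --name=sync-write-test --filename=/dev/nvme0n1 \
      --direct=1 --sync=1 --rw=write --bs=4k \
      --numjobs=1 --iodepth=1 --runtime=60 --time_based

Drives that sustain their 4k sync-write IOPS under this kind of load generally
behave well as OSD/WAL devices.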

Paul

-- 
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90

On Sat, Mar 30, 2019 at 7:55 AM Fabian Figueredo wrote:
>
> Hello,
> I'm in the process of building a new Ceph cluster, and this time around I
> was considering going with NVMe SSD drives.
> While searching for something in the range of 1 TB per SSD, I found the
> "Samsung 983 DCT 960GB NVMe M.2 Enterprise SSD for Business".
>
> More info: 
> https://www.samsung.com/us/business/products/computing/ssd/enterprise/983-dct-960gb-mz-1lb960ne/
>
> The idea is to buy 10 units.
>
> Does anyone have any thoughts on or experience with these drives?
>
> Thanks,
> Fabian


Re: [ceph-users] Recommended fs to use with rbd

2019-03-30 Thread Paul Emmerich
The only important thing is to enable discard/trim on the file system.
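
In practice that means two things; a rough sketch (device names and the exact
libvirt wiring are assumptions, not a definitive config):

  # 1) libvirt/KVM: pass discards through to RBD via the disk's <driver>
  #    element (virtio-scsi, or a recent virtio-blk), e.g.:
  #      <driver name='qemu' type='raw' cache='writeback' discard='unmap'/>
  # 2) inside the guest: mount with -o discard, or trim periodically:
  systemctl enable --now fstrim.timer
  fstrim -av   # one-off run to verify discards actually reach the pool

Periodic fstrim is usually nicer than the discard mount option, since inline
discards can add latency to every unlink.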


Paul

-- 
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90

On Fri, Mar 29, 2019 at 4:42 PM Marc Roos wrote:
>
>
> I would like to use an RBD image from a replicated HDD pool in a libvirt/KVM
> VM.
>
> 1. What is the best filesystem to use with RBD, just standard XFS?
> 2. Is there any recommended tuning for LVM when it spans multiple RBD
> images?


Re: [ceph-users] Ceph MDS laggy

2019-03-30 Thread Mark Schouten

Just a 'this worked' message: I have not seen the broken behaviour after my
upgrade last Monday. Thanks,

--

Mark Schouten 

Tuxis, Ede, https://www.tuxis.nl

T: +31 318 200208

----- Original Message -----


From: Mark Schouten (m...@tuxis.nl)
Date: 25-03-2019 12:51
To: Yan, Zheng (uker...@gmail.com)
Cc: Ceph Users (ceph-users@lists.ceph.com)
Subject: Re: [ceph-users] Ceph MDS laggy


On Mon, Mar 25, 2019 at 07:13:20PM +0800, Yan, Zheng wrote:
> Yes. the fix is in 12.2.11

Great, thanks.

--
Mark Schouten  | Tuxis Internet Engineering
KvK: 61527076  | http://www.tuxis.nl/
T: 0318 200208 | i...@tuxis.nl


Re: [ceph-users] Ceph block storage cluster limitations

2019-03-30 Thread Anthony D'Atri
> Hello,
> 
> I wanted to know if there are any max limitations on
> 
> - Max number of Ceph data nodes
> - Max number of OSDs per data node
> - Global max on number of OSDs
> - Any limitations on the size of each drive managed by OSD?
> - Any limitation on number of client nodes?
> - Any limitation on maximum number of RBD volumes that can be created?

I don’t think there are any *architectural* limits, but there can be *practical* 
limits.  There are a lot of variables and everyone has a unique situation, but 
some thoughts:

> Max number of Ceph data nodes

May be limited at some extreme by networking.  Don’t cheap out on your switches.

> - Max number of OSDs per data node

People have run at least 72.  Consider the RAM required for a given set of drives, 
and make sure a single host/chassis isn’t a big percentage of your cluster, i.e. 
don’t create a huge fault domain that will bite you later.  For a production 
cluster at scale I would suggest at least 12 OSD nodes, but this depends on 
lots of variables.  Conventional wisdom is 1 GB of RAM per 1 TB of OSD; in practice, 
for a large cluster, I would favor somewhat more.  A cluster with, say, 3 nodes 
of 72 OSDs each is going to be in a bad way when one of them fails.
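
As a rough illustration of the 1 GB per 1 TB rule of thumb for BlueStore OSDs
on a recent release (the figure below is an assumption for ~8 TB drives, not a
recommendation):

  # ceph.conf
  [osd]
  # ~1 GB RAM per TB of OSD, so ~8 GiB for an 8 TB drive
  osd_memory_target = 8589934592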

> - Global max on number of OSDs

A cluster with at least 10,800 OSDs has existed:

https://indico.cern.ch/event/542464/contributions/2202295/attachments/1289543/1921810/cephday-dan.pdf
https://indico.cern.ch/event/649159/contributions/2761965/attachments/1544385/2423339/hroussea-storage-at-CERN.pdf

The larger a cluster becomes, the more careful attention must be paid to 
topology and tuning.

> Also, any advise on using NVMes for OSD drives?

They rock.  Evaluate your servers carefully (a quick topology check is sketched 
below):
* Some route NVMe through a multi-mode SAS/SATA HBA instead of direct PCIe lanes
* Watch for PCIe bridges or multiplexing
* Pin IRQs/processes and minimize data crossing QPI links
* Faster cores, versus simply more of them, can squeeze out more performance

AMD Epyc single-socket systems may be very interesting for NVMe OSD nodes.
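
A quick way to spot bad NVMe placement before committing to a chassis (device
names and paths are examples):

  # which NUMA node each NVMe controller hangs off, plus the CPU/memory layout
  cat /sys/class/nvme/nvme0/device/numa_node
  numactl --hardware
  # tree view of the PCIe topology; look for bridges/switches in the path
  lspci -tv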

> What is the known maximum cluster size that Ceph RBD has been deployed to?

See above.


[ceph-users] how to force backfill a pg in ceph jewel

2019-03-30 Thread Nikhil R
Team,
Is there a way to force backfill of a PG in Ceph Jewel? I know this is
available in Mimic; is it available in Jewel as well?
I tried ceph pg backfill <pgid> & ceph pg backfill <pgid>, but no luck.
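
For reference, the syntax I mean (added in Luminous 12.2 as far as I know, and
present in Mimic) looks like this; the pg id is a placeholder:

  ceph pg force-backfill <pgid>
  ceph pg force-recovery <pgid>
  ceph pg cancel-force-backfill <pgid>

On jewel the only knobs I know of are the global throttles, e.g.:

  ceph tell osd.* injectargs '--osd-max-backfills 2 --osd-recovery-max-active 4'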

Any help would be appreciated as we have a prod issue.
in.linkedin.com/in/nikhilravindra