Doesn't 10+1 mean that one server can go offline without losing data and 
functionality? We are quite short on hardware and need as much space as 
possible... would 9+1 sound better, with one extra node kept spare?
Yes, that is what I see in my tests with regard to space. Can the min alloc 
size be changed?

Anton
-------- Original message --------
From: Paul Emmerich <paul.emmer...@croit.io>
Date: 30/07/2018 17:55 (GMT+02:00)
To: Anton Aleksandrov <an...@aleksandrov.eu>
Cc: Ceph Users <ceph-users@lists.ceph.com>
Subject: Re: [ceph-users] CephFS configuration for millions of small files
10+1 is a bad idea for obvious reasons (not enough coding chunks; you will be 
offline if even a single server goes down).
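
To spell that reasoning out: durability is governed by m, availability by 
min_size. A rough sketch, assuming min_size = k + 1 (the value usually 
recommended for EC pools and the default in newer releases; verify with 
"ceph osd pool get <pool> min_size") and one shard per host:

    # Rough check of what a k+m erasure-coded pool tolerates.
    # Assumption: min_size = k + 1, one EC shard per host.
    def ec_tolerance(k, m, min_size=None):
        if min_size is None:
            min_size = k + 1
        return {
            "shards_lost_before_data_loss": m,
            "io_continues_with_one_host_down": (k + m - 1) >= min_size,
        }

    print(ec_tolerance(k=10, m=1))  # data survives, but I/O stalls with one host down
    print(ec_tolerance(k=8, m=2))   # one host can be down and the pool keeps serving I/O

So with m = 1 the data survives one lost host, but I/O stalls until it is 
back; 9+1 has exactly the same issue, while something like 8+2 trades usable 
space for the ability to keep serving I/O.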
The real problem is that your 20 kB files will be split into 2 kB chunks, and 
the metadata overhead plus the BlueStore min alloc size will eat up your disk space.
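
A back-of-the-envelope sketch of that overhead (assuming the 64 kB default of 
bluestore_min_alloc_size_hdd; the exact value depends on the release and is 
fixed when the OSD is created):

    import math

    KB = 1024

    def ec_on_disk(file_size, k, m, min_alloc):
        # Rough raw footprint of one file in a k+m EC pool on BlueStore:
        # the file is striped into k data chunks, m coding chunks are added,
        # and every chunk is allocated in min_alloc-sized units.
        chunk = math.ceil(file_size / k)
        per_chunk = math.ceil(chunk / min_alloc) * min_alloc
        return (k + m) * per_chunk

    print(ec_on_disk(20 * KB, k=10, m=1, min_alloc=64 * KB) / KB)
    # -> 704.0 kB of raw space for a single 20 kB file

That is roughly 35x the logical size. bluestore_min_alloc_size_hdd can be 
lowered in ceph.conf, but as far as I know it only takes effect for OSDs 
created after the change, so existing OSDs would have to be redeployed.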


Paul

2018-07-30 13:44 GMT+02:00 Anton Aleksandrov <an...@aleksandrov.eu>:
Hello community,



I am building my first cluster for a project that hosts millions of small 
(from 20 kB) and big (up to 10 MB) files. Right now we are moving from local 
16 TB RAID storage to a cluster of 12 small machines. We are planning to have 
11 OSD nodes, an erasure-coded pool (10+1), and one host for the MDS.



In my local tests I see that the available space decreases disproportionately 
to the amount of data copied into the cluster. On a clean cluster I have, for 
example, 100 GB of available space, but after copying 40 GB in, the available 
space drops by roughly 5-10% more than the data I copied. Is that normal?
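
As a rough yardstick, in terms of raw cluster space a k+m pool should consume 
about (k+m)/k times the data written. A sketch, assuming the test pool uses 
the same 10+1 profile and ignoring allocation rounding and metadata:

    # Raw space a k+m EC pool should consume for a given amount of data,
    # before allocation rounding and metadata overhead.
    def expected_raw(data_bytes, k, m):
        return data_bytes * (k + m) / k

    GB = 1024 ** 3
    print(expected_raw(40 * GB, k=10, m=1) / GB)   # -> 44.0 GB raw for 40 GB of data

Anything well beyond that ratio is overhead (allocation rounding, metadata and 
so on), which hits hardest with many small files.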



Is there a term or setting that specifies the cluster's minimal object size?



I also have a question: could having so many small files (currently at least 
about 50,000,000) have a negative impact, and where would our bottleneck be? 
As we don't have money for SSDs, we will put the WAL/DB on a separate plain 
HDD.



Also, would it help to put the CephFS metadata pool on separate disks, away 
from the data pool drives?



Regards,

Anton.







-- 
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90


_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
