Hello,

Brian already mentioned a number of very pertinent things; I've got a few
more:

On Tue, 05 Apr 2016 10:48:49 -0400 d...@integrityhost.com wrote:

> In a 12 OSD setup, the following config is there:
> 
>             (OSDs * 100)
> Total PGs = ------------
>               pool size
> 

The PGcalc page at http://ceph.com/pgcalc/ is quite helpful and contains a
lot of background info as well.

As Brian said, you can never decrease the PG count, and growing it is a
very I/O-intensive operation as well, so you want to avoid that as much
as possible.
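
If you do have to grow a pool on a live cluster, the usual advice in
this ML is to raise pg_num in small steps (and pgp_num after it, which
is what triggers the actual data movement), letting things settle in
between. A rough sketch of that loop in Python; the pool name, step
size and starting value are hypothetical, and in practice you'd poll
for HEALTH_OK instead of sleeping blindly:

import subprocess
import time

POOL = "rbd"      # hypothetical pool name
TARGET = 1024     # desired final pg_num
STEP = 128        # hypothetical increment, size it to your I/O headroom

def pool_set(key, value):
    # Runs: ceph osd pool set <pool> <key> <value>
    subprocess.check_call(
        ["ceph", "osd", "pool", "set", POOL, key, str(value)])

current = 512     # in practice, read via "ceph osd pool get <pool> pg_num"
while current < TARGET:
    current = min(current + STEP, TARGET)
    pool_set("pg_num", current)    # creates the new (empty) PGs
    pool_set("pgp_num", current)   # starts the actual rebalancing
    time.sleep(600)  # crude; better to poll "ceph -s" until HEALTH_OK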

> 
> So with 12 OSDs and a pool size of 2 replicas, this would equal Total 
> PGs of 600 as per this url:

PGcalc with a target of 200 PGs per OSD (assuming the cluster is
expected to double in size) gives us 1024, which is also what I would go
for myself.
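
To make the arithmetic explicit, here is a quick sketch of the
calculation. The rounding rule is my reading of what PGcalc does
(nearest power of two, stepping up when that would land more than 25%
below the raw value); treat it as an approximation of the tool, not a
spec:

import math

def total_pgs(osds, pgs_per_osd, pool_size):
    # Raw value from the formula quoted above.
    raw = osds * pgs_per_osd / pool_size
    # Nearest power of two at or below raw; step up if it is
    # more than 25% below the raw value.
    lower = 2 ** int(math.log2(raw))
    return lower * 2 if (raw - lower) / raw > 0.25 else lower

print(total_pgs(12, 100, 2))  # raw 600  -> 512
print(total_pgs(12, 200, 2))  # raw 1200 -> 1024, matching the above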

However, if this is a production cluster and your OSDs are NOT RAID1 or
very, very reliable, fast and well-monitored SSDs, you're basically
asking Murphy to come visit, destroy your data while eating babies, and
wash them down with bath water.

The default replication size was changed to 3 for a very good reason;
there are plenty of threads in this ML about failure scenarios and
probabilities.
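
To make the size 2 risk concrete, a toy back-of-envelope (assuming CRUSH
spreads PGs roughly uniformly and the 1024 PGs from above; with size 2,
each PG lives on exactly one pair of OSDs):

from math import comb  # Python 3.8+

osds = 12
pgs = 1024
pairs = comb(osds, 2)  # distinct OSD pairs a size-2 PG can map to: 66

# 1024 PGs over only 66 possible pairs means every pair holds PGs,
# so ANY two concurrent OSD failures (or a second failure during
# recovery from the first) will take PGs, and thus data, with them.
print(pairs, pgs / pairs)  # 66 pairs, ~15.5 PGs per pair on average

With size 3 you need three overlapping failures instead of two, which is
a very different probability.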

Christian

> 
> http://docs.ceph.com/docs/master/rados/operations/placement-groups/#preselection
> 
> Yet in the same page, at the top it says:
> 
> Between 10 and 50 OSDs set pg_num to 4096
> 
> Our use is for shared hosting so there are lots of small writes and 
> reads.  Which of these would be correct?
> 
> Also is it a simple process to update PGs on a live system without 
> affecting service?


-- 
Christian Balzer        Network/Systems Engineer                
ch...@gol.com           Global OnLine Japan/Rakuten Communications
http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
