[ceph-users] Ceph repository for Debian Jessie

2015-08-26 Thread Konstantinos
We are using Ceph/RADOS in quite a few deployments, one of which is a production environment hosting VMs (using a custom, in-house-developed block device layer based on librados). We are running Firefly and would like to move to Hammer. -- Kind Regards, Konstantinos
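
For reference, a minimal sketch of pointing a jessie host at the Hammer packages; the repository URL, suite name, and package selection here are assumptions for illustration, not details confirmed in the thread:

  # Assumed Hammer repository for Debian jessie (verify the URL before use)
  wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
  echo 'deb https://download.ceph.com/debian-hammer/ jessie main' | \
      sudo tee /etc/apt/sources.list.d/ceph.list
  sudo apt-get update && sudo apt-get install -y ceph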

Re: [ceph-users] Erroneous stats output (ceph df) after increasing PG number

2014-08-05 Thread Konstantinos Tompoulidis
Konstantinos Tompoulidis writes: > > Sage Weil ...> writes: > > > > > On Mon, 4 Aug 2014, Konstantinos Tompoulidis wrote: > > > Hi all, > > > > > > We recently added many OSDs to our production cluster. > > > This brought u

Re: [ceph-users] Erroneous stats output (ceph df) after increasing PG number

2014-08-04 Thread Konstantinos Tompoulidis
Sage Weil writes: > > On Mon, 4 Aug 2014, Konstantinos Tompoulidis wrote: > > Hi all, > > > > We recently added many OSDs to our production cluster. > > This brought us to a point where the number of PGs we had assigned to our > > main (heavily used) pool
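
As a rough reference when sizing a pool, a sketch of the common rule of thumb for the target PG count; the OSD count, replica size, and the rule itself are assumptions for illustration, not figures taken from this thread:

  # target_pgs ~= (num_osds * 100) / replica_size, rounded up to a power of two
  osds=240; size=3                      # hypothetical cluster
  raw=$(( osds * 100 / size ))
  pgs=1; while [ $pgs -lt $raw ]; do pgs=$(( pgs * 2 )); done
  echo "suggested pg_num: $pgs"         # -> 8192 for this example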

[ceph-users] Erroneous stats output (ceph df) after increasing PG number

2014-08-04 Thread Konstantinos Tompoulidis
optimal value. Once the procedure ended, we noticed that the output of "ceph df" (POOLS:) does not represent the actual state. Has anyone noticed this before and, if so, is there a fix? Thanks in advance, Konstantinos
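
One way to cross-check the per-pool figures after the split finishes; this is a generic sanity check rather than a known fix for the reported discrepancy, and the pool name is a placeholder:

  ceph df detail                        # per-pool usage as reported by the monitors
  rados df                              # per-pool stats aggregated from the OSDs
  ceph osd pool get <poolname> pg_num   # confirm the new PG count actually applied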

Re: [ceph-users] Throttle pool pg_num/pgp_num increase impact

2014-07-31 Thread Konstantinos Tompoulidis
Hi all, We have been working closely with Kostis on this and we have some results we thought we should share. Increasing the PG count was mandatory for us since we had been noticing fragmentation* issues on many OSDs. Also, we were below the recommended number for our main pool for quite some time
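
A sketch of one way to throttle such an increase, stepping pg_num/pgp_num up gradually and letting the cluster settle between steps; the pool name, step sizes, and polling interval are hypothetical, not the procedure used here:

  for target in 4096 8192 16384; do
      ceph osd pool set rbd pg_num  $target
      ceph osd pool set rbd pgp_num $target
      # wait for splitting/backfill to settle before the next step
      while ! ceph health | grep -q HEALTH_OK; do sleep 60; done
  done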