We are using Ceph/RADOS in quite a few deployments, one of
which is a production environment hosting VMs (using a custom,
in-house developed block device layer based on librados).
We are currently running Firefly and would like to move to Hammer.
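
For context, the block layer essentially stripes each virtual disk over
fixed-size RADOS objects. The snippet below is only a rough sketch of that
idea using the python-rados bindings, not our actual driver; the pool name,
image name and 4 MiB object size are made-up examples, and single writes are
assumed not to cross an object boundary.

import rados

OBJECT_SIZE = 4 * 1024 * 1024   # 4 MiB per backing object (example value)

class SimpleBlockImage(object):
    def __init__(self, ioctx, image_name):
        self.ioctx = ioctx
        self.image_name = image_name

    def _object_name(self, offset):
        # Name of the backing RADOS object that holds this byte offset.
        return "%s.%016x" % (self.image_name, offset // OBJECT_SIZE)

    def write(self, offset, data):
        # Write into the backing object at the in-object offset.
        self.ioctx.write(self._object_name(offset), data, offset % OBJECT_SIZE)

    def read(self, offset, length):
        return self.ioctx.read(self._object_name(offset), length,
                               offset % OBJECT_SIZE)

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
ioctx = cluster.open_ioctx("vm-images")      # example pool name
img = SimpleBlockImage(ioctx, "vm0-disk0")   # example image name
img.write(0, b"bootsector...")
print(img.read(0, 13))
ioctx.close()
cluster.shutdown()
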
--
Kind Regards,
Konstantinos
Sage Weil writes:
>
> On Mon, 4 Aug 2014, Konstantinos Tompoulidis wrote:
> > Hi all,
> >
> > We recently added many OSDs to our production cluster.
> > This brought us to a point where the number of PGs we had assigned to our
> > main (heavily used) pool was well below the optimal value, so we went
> > ahead and increased it.
> > Once the procedure ended, we noticed that the output of "ceph df" (the
> > POOLS: section) does not reflect the actual state of the pools.
> > Has anyone noticed this before, and if so, is there a fix?
> > Thanks in advance,
> > Konstantinos
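
For what it's worth, the mismatch can be cross-checked by comparing what
"ceph df" reports for a pool against the per-pool counters librados itself
returns. A rough sketch with the python-rados bindings (the pool name is just
an example, this assumes a build that exposes mon_command, and the exact JSON
layout of "df" can differ between releases):

import json
import rados

POOL = "rbd"   # example pool name

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()

# Per-pool stats as shown by "ceph df", fetched in JSON form.
ret, out, errs = cluster.mon_command(
    json.dumps({"prefix": "df", "format": "json"}), b"")
df_pools = {p["name"]: p["stats"]
            for p in json.loads(out.decode("utf-8"))["pools"]}

# Per-pool stats as librados itself sees them.
ioctx = cluster.open_ioctx(POOL)
lib_stats = ioctx.get_stats()

print("ceph df  :", df_pools[POOL])
print("librados :", lib_stats["num_objects"], "objects,",
      lib_stats["num_bytes"], "bytes")

ioctx.close()
cluster.shutdown()
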
Hi all,
We have been working closely with Kostis on this and we have some results we
thought we should share.
Increasing the PGs was mandatory for us since we had been noticing
fragmentation* issues on many OSDs. Also, we had been below the recommended
number of PGs for our main pool for quite some time.
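
For reference, the commonly recommended target is roughly 100 PGs per OSD
divided by the pool's replication size, rounded up to a power of two. A quick
way to sanity-check it (the OSD count and replica size below are made-up
example numbers, not our cluster's):

def recommended_pg_num(num_osds, replica_size, target_pgs_per_osd=100):
    # total PGs ~= (num_osds * target_pgs_per_osd) / replica_size,
    # rounded up to the next power of two.
    raw = (num_osds * target_pgs_per_osd) / float(replica_size)
    pg_num = 1
    while pg_num < raw:
        pg_num *= 2
    return pg_num

print(recommended_pg_num(num_osds=120, replica_size=3))   # -> 4096
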