Yes, this is normal. The pgmap version updates continuously even on an
idle system, because it is incremented whenever the mon receives the
periodic PG status reports from the OSDs.

It's a bit annoying if you want to set something else up to trigger when
the PG status changes - in that case you have to actually examine the
parts of the PG status you care about (e.g. 'ceph pg dump pgs_brief') and
check whether the values differ  :-/
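
If it helps, here's a rough, untested sketch of that approach in Python.
It shells out to 'ceph pg dump pgs_brief --format json' and only reacts
when the per-PG state actually changes; the JSON layout and field names
vary a bit between releases, so treat those as assumptions and adjust:

#!/usr/bin/env python
# Rough sketch (untested): poll 'ceph pg dump pgs_brief' and only react
# when the per-PG state actually changes, rather than watching the pgmap
# version, which bumps constantly even on an idle cluster.
import json
import subprocess
import time

def pg_brief():
    out = subprocess.check_output(
        ["ceph", "pg", "dump", "pgs_brief", "--format", "json"])
    data = json.loads(out)
    # Some releases return a plain list of PG entries, others wrap it in a
    # dict with a "pg_stats" list - handle both.
    pgs = data.get("pg_stats", data) if isinstance(data, dict) else data
    # Key by pgid so the comparison is independent of ordering.
    return dict((pg["pgid"], pg["state"]) for pg in pgs)

last = pg_brief()
while True:
    time.sleep(10)
    current = pg_brief()
    if current != last:
        print("pg status changed")   # hook in whatever you need here
        last = current

pgs_brief also carries the up/acting sets, so you could fold those into
the comparison as well if placement changes matter to you.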

Cheers,
John


On Wed, Apr 30, 2014 at 4:21 PM, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:

> I'm testing an idle Ceph cluster.
> My pgmap version is always increasing - is this normal?
>
> 2014-04-30 17:20:41.934127 mon.0 [INF] pgmap v281: 640 pgs: 640
> active+clean; 0 bytes data, 333 MB used, 14896 GB / 14896 GB avail
> 2014-04-30 17:20:42.962033 mon.0 [INF] pgmap v282: 640 pgs: 640
> active+clean; 0 bytes data, 333 MB used, 14896 GB / 14896 GB avail
> 2014-04-30 17:20:35.373060 osd.4 [INF] 0.179 scrub ok
> 2014-04-30 17:20:37.373338 osd.4 [INF] 0.7a scrub ok
> 2014-04-30 17:20:38.373606 osd.4 [INF] 0.1ba scrub ok
> 2014-04-30 17:20:43.990160 mon.0 [INF] pgmap v283: 640 pgs: 640
> active+clean; 0 bytes data, 333 MB used, 14896 GB / 14896 GB avail
> 2014-04-30 17:20:46.361545 mon.0 [INF] pgmap v284: 640 pgs: 640
> active+clean; 0 bytes data, 333 MB used, 14896 GB / 14896 GB avail
> 2014-04-30 17:20:48.438894 mon.0 [INF] pgmap v285: 640 pgs: 640
> active+clean; 0 bytes data, 333 MB used, 14896 GB / 14896 GB avail
> 2014-04-30 17:20:44.297707 osd.2 [INF] 2.26 scrub ok
> 2014-04-30 17:20:46.297851 osd.2 [INF] 2.27 scrub ok
> 2014-04-30 17:20:48.298423 osd.2 [INF] 2.29 scrub ok
> 2014-04-30 17:20:51.931978 mon.0 [INF] pgmap v286: 640 pgs: 640
> active+clean; 0 bytes data, 333 MB used, 14896 GB / 14896 GB avail
> 2014-04-30 17:20:46.374796 osd.4 [INF] 0.3e scrub ok
> 2014-04-30 17:20:48.375078 osd.4 [INF] 1.2 scrub ok
> 2014-04-30 17:20:50.375458 osd.4 [INF] 1.3d scrub ok
> 2014-04-30 17:20:51.375821 osd.4 [INF] 2.1 scrub ok
> 2014-04-30 17:20:52.376033 osd.4 [INF] 2.3c scrub ok
> 2014-04-30 17:20:53.954350 mon.0 [INF] pgmap v287: 640 pgs: 640
> active+clean; 0 bytes data, 333 MB used, 14896 GB / 14896 GB avail
> 2014-04-30 17:20:56.364735 mon.0 [INF] pgmap v288: 640 pgs: 640
> active+clean; 0 bytes data, 333 MB used, 14896 GB / 14896 GB avail
> 2014-04-30 17:20:53.299142 osd.2 [INF] 2.2c scrub ok
> 2014-04-30 17:20:58.299835 osd.2 [INF] 2.3d scrub ok
> 2014-04-30 17:21:01.932738 mon.0 [INF] pgmap v289: 640 pgs: 640
> active+clean; 0 bytes data, 333 MB used, 14896 GB / 14896 GB avail
>
>
>
> The cluster is doing nothing at this time.
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
