On 13/02/2016 06:31, Christian Balzer wrote:
> [...]
> ---
> So from shutdown to startup about 2 seconds, not that bad.
>
> However here is where the cookie crumbles massively:
> ---
> 2016-02-12 01:33:50.263152 7f75be4d57c0 0 filestore(/var/lib/ceph/osd/ceph-2) limited size xattrs
> 2016-02-12 0
Hello,
On Sat, 13 Feb 2016 11:14:23 +0100 Lionel Bouton wrote:
> On 13/02/2016 06:31, Christian Balzer wrote:
> > [...]
> > ---
> > So from shutdown to startup about 2 seconds, not that bad.
> >
> > However here is where the cookie crumbles massively:
> > ---
> > 2016-02-12 01:33:50.263152 7f75be4d
Greg, that's very useful info. I had not queried the admin sockets before
today, so I am learning new things!
On the x86_64: mds, mon, and osd, and rbd + cephfs client
ceph version 0.94.5 (9764da52395923e0b32908d83a9f7304401fee43)
On the arm7 nodes: mon, osd, and rbd + cephfs clients
ceph versio
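For reference, the admin sockets mentioned above are queried with commands like `ceph daemon osd.2 perf dump`, which return JSON that is easy to post-process. A minimal sketch of pulling an average latency out of such a dump (the sample counters below are illustrative, not from a real cluster):

```python
import json

# Illustrative fragment of `ceph daemon osd.N perf dump` output;
# a real dump has many more sections and counters.
sample = '''
{
  "filestore": {
    "journal_latency": {"avgcount": 1200, "sum": 4.8},
    "apply_latency":   {"avgcount": 1200, "sum": 9.6}
  }
}
'''

def avg_latency(dump: dict, section: str, counter: str) -> float:
    """Average latency in seconds for an (avgcount, sum) style counter."""
    c = dump[section][counter]
    return c["sum"] / c["avgcount"] if c["avgcount"] else 0.0

dump = json.loads(sample)
print(avg_latency(dump, "filestore", "journal_latency"))  # average journal latency in seconds
```

The same pattern works for `perf dump` output from mons and mds daemons, since all use the (avgcount, sum) counter convention.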
Hi,
On 13/02/2016 15:52, Christian Balzer wrote:
> [..]
>
> Hmm, that's surprisingly long. How much data (size and number of files) do
> you have on this OSD, which FS do you use, what are the mount options,
> what is the hardware and the kind of access?
>
> I already mentioned the HW, Areca RAID c
Thanks Nick,
It seems Ceph has a big performance gap on all-SSD setups; software latency
can be a bottleneck.
https://ceph.com/planet/the-ceph-and-tcmalloc-performance-story/
http://www.flashmemorysummit.com/English/Collaterals/Proceedings/2015/20150813_S303E_Zhang.pdf
http://events.linuxfoundation.org/
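The tcmalloc issue discussed in the first link above was commonly mitigated by enlarging tcmalloc's thread cache via an environment variable picked up by the Ceph daemons. A sketch of that tuning (the 128 MB value is the one used in most published benchmarks; tune for your own workload):

```shell
# /etc/sysconfig/ceph on RPM-based systems (/etc/default/ceph on Debian).
# Raise tcmalloc's total thread cache above its small default to cut
# allocator contention under heavy small-IO load on SSDs.
TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728
```

The daemons must be restarted for the new value to take effect.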
> > Next this:
> > ---
> > 2016-02-12 01:35:33.915981 7f75be4d57c0 0 osd.2 1788 load_pgs
> > 2016-02-12 01:36:32.989709 7f75be4d57c0 0 osd.2 1788 load_pgs opened 564 pgs
> > ---
> > Another minute to load the PGs.
>
> Same OSD reboot as above: 8 seconds for this.
Do you really have 564 pgs on a si
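A per-OSD PG count like the 564 above can be derived from the PG map, e.g. from `ceph pg dump --format json`, by counting PGs whose acting set includes the OSD. A minimal sketch (the sample dump below is illustrative; real dumps carry many more fields per PG):

```python
import json

# Illustrative fragment of `ceph pg dump --format json` output.
sample = '''
{"pg_stats": [
  {"pgid": "0.1", "acting": [2, 0, 1]},
  {"pgid": "0.2", "acting": [0, 1, 3]},
  {"pgid": "0.3", "acting": [1, 2, 0]}
]}
'''

def pgs_on_osd(dump: dict, osd_id: int) -> int:
    """Number of PGs whose acting set includes the given OSD."""
    return sum(osd_id in pg["acting"] for pg in dump["pg_stats"])

dump = json.loads(sample)
print(pgs_on_osd(dump, 2))  # → 2
```

On recent releases the same number is also visible directly in the PGS column of `ceph osd df`.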
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
I'm still going to see if I can get Ceph clients to hardly notice when
an OSD comes back in. Our setup is EXT4, and our SSDs have the hardest
time with the longest recovery impact. It should be painless no matter
how slow the drives/CPU/etc. are. If i
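One common way to soften the recovery impact described above is to throttle backfill and recovery in ceph.conf (or inject the values at runtime). A sketch with the most restrictive values; these are well below the defaults and trade recovery speed for client latency:

```shell
# [osd] section of ceph.conf; apply live with:
#   ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'
osd max backfills = 1          # concurrent backfills per OSD
osd recovery max active = 1    # concurrent recovery ops per OSD
osd recovery op priority = 1   # deprioritize recovery vs. client ops
```

How far to throttle depends on how long you can tolerate running degraded: slower recovery also means a longer window with reduced redundancy.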
Hello,
I was about to write something very much along these lines, thanks for
beating me to it. ^o^
On Sat, 13 Feb 2016 21:50:17 -0700 Robert LeBlanc wrote:
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA256
>
> I'm still going to see if I can get Ceph clients to hardly notice that
> an OSD c
On Sat, 13 Feb 2016 20:51:19 -0700 Tom Christensen wrote:
> > > Next this:
> > > ---
> > > 2016-02-12 01:35:33.915981 7f75be4d57c0 0 osd.2 1788 load_pgs
> > > 2016-02-12 01:36:32.989709 7f75be4d57c0 0 osd.2 1788 load_pgs opened 564 pgs
> > > ---
> > > Another minute to load the PGs.
> >
> > Same OSD rebo
On Sat, Feb 13, 2016 at 8:14 AM, Blade Doyle wrote:
> Greg, that's very useful info. I had not queried the admin sockets before
> today, so I am learning new things!
>
> on the x86_64: mds, mon, and osd, and rbd + cephfs client
> ceph version 0.94.5 (9764da52395923e0b32908d83a9f7304401fee43)
>
>