Re: [ceph-users] low power single disk nodes

2015-04-10 Thread Josef Johansson
Hi, You have these guys as well, http://www.seagate.com/gb/en/products/enterprise-servers-storage/nearline-storage/kinetic-hdd/ I talked to them during WHD, and they said that it's not a fit for Ceph if you pack 70 of them in one chassis because of the noise level. I would assume that 1U with a lot o

Re: [ceph-users] ceph-osd failure following 0.92 -> 0.94 upgrade

2015-04-10 Thread Dirk Grunwald
I've gone through the ceph-users mailing list, and the only suggested fix (by Sage) was to roll back to v0.92, run ceph-osd -i NNN --flush-journal, and then upgrade to v0.93 (which was the issue at the time). However, I've done that, and the v0.92 code faults for a different reason, which I suspect is
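
For reference, a minimal sketch of that rollback-and-flush sequence (the OSD id 12 and the package names/versions are placeholders; the exact yum downgrade syntax depends on your distro and repo layout):

    # stop the affected OSD before touching its journal
    service ceph stop osd.12
    # roll the packages back to v0.92 (adjust names/versions for your setup)
    yum downgrade ceph-0.92 ceph-common-0.92
    # flush the journal with the old code, then move forward again
    ceph-osd -i 12 --flush-journal
    yum update ceph ceph-common
    service ceph start osd.12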

Re: [ceph-users] live migration fails with image on ceph

2015-04-10 Thread Josh Durgin
On 04/08/2015 09:37 PM, Yuming Ma (yumima) wrote: Josh, I think we are using plain live migration and not mirroring block drives as the other test did. Do you have the migration flags or more from the libvirt log? Also, which version of qemu is this? The libvirt log message about qemuMigratio
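
One way to get at those flags is to enable libvirt's debug logging on both hosts and check the qemu version; a minimal sketch, assuming the stock libvirtd.conf path and sysvinit service names:

    # /etc/libvirt/libvirtd.conf on source and destination hosts
    log_filters="1:qemu 1:libvirt"
    log_outputs="1:file:/var/log/libvirt/libvirtd.log"

    # restart libvirtd, retry the migration, then look for the flags
    service libvirtd restart
    grep -i migrat /var/log/libvirt/libvirtd.log

    # qemu and libvirt versions on each host
    qemu-system-x86_64 --version
    virsh version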

Re: [ceph-users] CentOS 7.1: Upgrading (downgrading) from 0.80.9 to bundled rpms

2015-04-10 Thread Ken Dreyer
Hi Dan, Arne, KB, I got a chance to look into this this afternoon. In a CentOS 7.1 VM (that's not using EPEL), I found that ceph.com's 0.80.9 fails to install due to an Epoch issue. I've opened a ticket for that: http://tracker.ceph.com/issues/11371 I think you're asking about the reverse, though
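
For anyone hitting the same thing, the Epoch mismatch can be inspected directly with rpm and yum; a quick check (the ceph package name is assumed, and "(none)" in the output means the package carries no Epoch):

    # show Epoch:Version-Release of the installed package
    rpm -q --qf '%{EPOCH}:%{VERSION}-%{RELEASE}\n' ceph
    # list every candidate version each enabled repo offers
    yum list ceph --showduplicates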

Re: [ceph-users] deep scrubbing causes osd down

2015-04-10 Thread Haomai Wang
It looks like the deep scrub keeps the disk busy and some threads block on it. Maybe you could lower the scrub-related configuration values and watch the disk utilization while deep-scrubbing. On Sat, Apr 11, 2015 at 3:01 AM, Andrei Mikhailovsky wrote: > Hi guys, > > I was wondering if anyone noticed that the d
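
A minimal sketch of what lowering the scrub settings could look like (the values are illustrative, and injectargs changes only last until the OSDs restart):

    # throttle scrubbing at runtime across all OSDs
    ceph tell osd.* injectargs '--osd_max_scrubs 1 --osd_scrub_sleep 0.1'
    # watch per-disk utilization while a deep scrub is running
    iostat -x 5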