Re: [ceph-users] ceph-osd pegging CPU on giant, no snapshots involved this time

2015-02-19 Thread Florian Haas
On Wed, Feb 18, 2015 at 10:27 PM, Florian Haas wrote: > On Wed, Feb 18, 2015 at 9:32 PM, Mark Nelson wrote: >> On 02/18/2015 02:19 PM, Florian Haas wrote: >>> >>> Hey everyone, >>> >>> I must confess I'm still not fully understanding this problem and >>> don't exactly know where to start digging

Re: [ceph-users] Updating monmap

2015-02-19 Thread SUNDAY A. OLUTAYO
After the injection of the new monmap is done, the monmap dump still reveals the old one; the old one is not overwritten by the new one. Thanks, Sunday Olutayo Sadeeb Technologies Ltd 7 Mayegun Street, Ojo Lagos State, Nigeria. Tel: +234 1 7404524 D/L: +234 1 8169922 Cell: +234 8054600338,
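A minimal sketch of the usual inject-and-verify sequence, assuming a sysvinit setup, a monitor called mon.a and a map at /tmp/monmap (all placeholders); the monitor has to be stopped while the map is injected, otherwise the dump will keep showing the old epoch:

  # stop the monitor before touching its store (init system dependent)
  service ceph stop mon.a
  # inject the edited map into the stopped monitor's store
  ceph-mon -i a --inject-monmap /tmp/monmap
  # restart and confirm the new map is actually in effect
  service ceph start mon.a
  ceph mon dump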

Re: [ceph-users] Updating monmap

2015-02-19 Thread SUNDAY A. OLUTAYO
I was trying to remove the (mon-node) without replacing it. What really happened is this: I was adding a new mon-node after an initially successful cluster setup with one mon-node, but the addition was not successful and it had already affected the monmap; trying to remove it with "ceph mon remove" or ceph-de
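If the cluster still has quorum with the original monitor, a hedged sketch of dropping the half-added node (mon.b is a placeholder name):

  # remove the stray monitor from the map while quorum still holds
  ceph mon remove b
  # then clean up its entry in ceph.conf and its data directory on that
  # host, e.g. /var/lib/ceph/mon/ceph-b

If quorum is already broken, the monmap extract/edit/inject route shown in the reply below is the fallback.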

[ceph-users] Ceph Tech Talks

2015-02-19 Thread Patrick McGarry
Hey cephers, We're opening up our Ceph Tech Talks to the broader Ceph community. So, for those of you who would like to attend and ask questions in-person, the next one is a week from today! http://ceph.com/ceph-tech-talks/ Last month Sam Just gave a great talk on the inner workings of RADOS, an

Re: [ceph-users] Updating monmap

2015-02-19 Thread Brian Andrus
Hi Sunday, did you verify the contents of your monmap? In general, the procedure might look something like this:
- ceph-mon -i <mon-id> --extract-monmap /tmp/monmap
- monmaptool --print /tmp/monmap
- monmaptool --rm <old-mon> --add <new-mon> <ip:port> --clobber /tmp/monmap
- monmaptool --print /tmp/monmap
at this p
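A filled-in version of the same sequence with purely hypothetical names and addresses (mon.b being replaced by mon.c at 192.168.0.3), just to make the placeholders concrete:

  ceph-mon -i a --extract-monmap /tmp/monmap
  monmaptool --print /tmp/monmap
  monmaptool --rm b --add c 192.168.0.3:6789 --clobber /tmp/monmap
  monmaptool --print /tmp/monmap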

Re: [ceph-users] Privileges for read-only CephFS access?

2015-02-19 Thread Florian Haas
On Thu, Feb 19, 2015 at 12:50 AM, Gregory Farnum wrote: > On Wed, Feb 18, 2015 at 3:30 PM, Florian Haas wrote: >> On Wed, Feb 18, 2015 at 11:41 PM, Gregory Farnum wrote: >>> On Wed, Feb 18, 2015 at 1:58 PM, Florian Haas wrote: On Wed, Feb 18, 2015 at 10:28 PM, Oliver Schulz wrote: > D

Re: [ceph-users] rbd: I/O Errors in low memory situations

2015-02-19 Thread Mike Christie
On 02/18/2015 06:05 PM, "Sebastian Köhler [Alfahosting GmbH]" wrote: > Hi, > > yesterday we had the problem that one of our cluster clients > remounted an rbd device in read-only mode. We found this[1] stack trace > in the logs. We investigated further and found similar traces on all > other ma

Re: [ceph-users] ceph-osd pegging CPU on giant, no snapshots involved this time

2015-02-19 Thread Lindsay Mathieson
On Thu, 19 Feb 2015 05:56:46 PM Florian Haas wrote: > As it is, a simple "perf top" basically hosing the system wouldn't be > something that is generally considered expected. Could the disk or controller be failing?
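If it helps reproduce the issue with less collateral damage, perf can be pointed at just the busy OSD and throttled; a sketch where the pid and the 99 Hz rate are illustrative choices, not values from this thread:

  # sample only the suspect ceph-osd process at a modest frequency
  perf top -F 99 -p <osd-pid>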

[ceph-users] Minor version difference between monitors and OSDs

2015-02-19 Thread Christian Balzer
Hello, I have a cluster currently at 0.80.1 and would like to upgrade it to 0.80.7 (Debian as you can guess), but for a number of reasons I can't really do it all at the same time. In particular I would like to upgrade the primary monitor node first and the secondary ones as well as the OSDs lat
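Whichever order you end up with, it is easy to keep track of which daemons run which point release during the rolling upgrade; a minimal sketch, with osd.0 and mon.a as placeholder names:

  # ask individual daemons for their running version (run on the daemon's
  # host for the admin-socket form)
  ceph tell osd.0 version
  ceph daemon mon.a version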

[ceph-users] new ssd intel s3610, has somebody tested them ?

2015-02-19 Thread Alexandre DERUMIER
Hi, Intel has just released the new SSD DC S3610: http://www.anandtech.com/show/8954/intel-launches-ssd-dc-s3610-s3710-enterprise-ssds Endurance is 10x higher than the S3500, for about 10% additional cost. Has anybody already tested them? Regards, Alexandre
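The usual first check for Ceph journal duty is a small-block O_DSYNC write test; a sketch assuming the drive appears as /dev/sdX (placeholder), and note that it writes straight to the raw device and destroys its contents:

  fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k \
      --numjobs=1 --iodepth=1 --runtime=60 --time_based --name=journal-test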

Re: [ceph-users] rbd: I/O Errors in low memory situations

2015-02-19 Thread Ilya Dryomov
On Fri, Feb 20, 2015 at 2:21 AM, Mike Christie wrote: > On 02/18/2015 06:05 PM, "Sebastian Köhler [Alfahosting GmbH]" wrote: >> Hi, >> >> yesterday we had the problem that one of our cluster clients >> remounted an rbd device in read-only mode. We found this[1] stack trace >> in the logs. We in