Re: [ceph-users] Uneven CPU usage on OSD nodes

2015-03-21 Thread Josef Johansson
I'm neither a dev nor a well-informed Cepher, but I've seen posts suggesting that the pg count may be set too high; see https://www.mail-archive.com/ceph-users@lists.ceph.com/msg16205.html Also, we use 128GB+ in production on the OSD servers with 10 OSDs per server because it boosts the read cache, so you may w
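A rough way to sanity-check a pg count is the rule of thumb from the Ceph docs of this era: total PGs across a pool's OSDs of roughly (number of OSDs x 100) / replica count, rounded up to the next power of two. A minimal sketch of that arithmetic (the function name and default are illustrative, not from the thread):

    def suggested_pg_num(num_osds, replicas, target_pgs_per_osd=100):
        # rule of thumb: (OSDs * 100) / replica count,
        # rounded up to the next power of two
        raw = num_osds * target_pgs_per_osd / float(replicas)
        pg_num = 1
        while pg_num < raw:
            pg_num *= 2
        return pg_num

    print(suggested_pg_num(num_osds=30, replicas=3))  # -> 1024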

[ceph-users] ceph object storage meters added to openstack ceilometer

2015-03-21 Thread M Ranga Swami Reddy
Hi All, We have added the Ceph object storage meters to OpenStack Ceilometer. Here is the OpenStack doc update: http://docs.openstack.org/admin-guide-cloud/content/section_telemetry-ceph-object-storage-metrics.html Thanks, Swami

Re: [ceph-users] OSD Forece Removal

2015-03-21 Thread Kobi Laredo
*ceph osd rm osd.#* should do the trick. *Kobi Laredo* *Cloud Systems Engineer* | (408) 409-KOBI On Fri, Mar 20, 2015 at 4:02 PM, Robert LeBlanc wrote: > Yes, at this point, I'd export the CRUSH map, edit it and import it back in. > What version are you running? > > Robert LeBlanc > > Sent from
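For context, ceph osd rm is only the last step of the manual removal sequence in the Ceph docs; the OSD also has to come out of the CRUSH map and have its cephx key deleted. A minimal sketch of the full sequence (the wrapper function and OSD id are hypothetical; it assumes the OSD is already stopped and marked out):

    import subprocess

    def remove_osd(osd_id):
        # manual removal sequence from the Ceph docs
        for cmd in (
            ["ceph", "osd", "crush", "remove", "osd.%d" % osd_id],  # drop it from the CRUSH map
            ["ceph", "auth", "del", "osd.%d" % osd_id],             # delete its cephx key
            ["ceph", "osd", "rm", str(osd_id)],                     # remove it from the cluster
        ):
            subprocess.check_call(cmd)

    remove_osd(5)  # hypothetical OSD id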

Re: [ceph-users] OSD + Flashcache + udev + Partition uuid

2015-03-21 Thread Andrei Mikhailovsky
In long-term use I also had some issues with flashcache and enhanceio; I've noticed frequent slow requests. Andrei - Original Message - > From: "Robert LeBlanc" > To: "Nick Fisk" > Cc: ceph-users@lists.ceph.com > Sent: Friday, 20 March, 2015 8:14:16 PM > Subject: Re: [ceph-users]

[ceph-users] How does crush select different osds using hash(pg) in different iterations

2015-03-21 Thread shylesh kumar
Hi, I was going through this simplified crush algorithm given on the ceph website. def crush(pg): all_osds = ['osd.0', 'osd.1', 'osd.2', ...] result = [] # size is the number of copies; primary+replicas while len(result) < size: --> *r = hash(pg)* chosen = all_osds[ r % len(all
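The detail the simplified pseudocode hides, and the answer to the thread's question, is that real CRUSH does not hash the pg alone: it hashes the pg together with a retry/replica counter r, so the hash output (and therefore the chosen OSD) changes on each iteration. A runnable sketch of that idea, with md5 standing in for CRUSH's actual rjenkins hash and all names illustrative:

    import hashlib

    def crush(pg, all_osds, size):
        result = []
        r = 0  # attempt/replica counter
        while len(result) < size:
            # r is hashed together with the pg, so the output
            # differs on every iteration (real CRUSH uses
            # rjenkins, not md5)
            h = int(hashlib.md5(("%s/%d" % (pg, r)).encode()).hexdigest(), 16)
            chosen = all_osds[h % len(all_osds)]
            r += 1
            if chosen in result:
                continue  # collision: already picked, retry with the next r
            result.append(chosen)
        return result

    print(crush("1.2f", ["osd.0", "osd.1", "osd.2", "osd.3"], 2))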

Re: [ceph-users] Replacing a failed OSD disk drive (or replace XFS with BTRFS)

2015-03-21 Thread Robert LeBlanc
When you reformat the drive, it generates a new UUID, so to Ceph it is as if it were a brand new drive. This does seem heavy-handed, but Ceph was designed for things to fail, and it is not unusual to do things this way. Ceph is not RAID, so you usually have to do some unlearning. You could probably ke

Re: [ceph-users] PHP Rados failed in read operation if object size is large (say more than 10 MB )

2015-03-21 Thread Gaurang Vyas
Fixed: https://github.com/gdvyas/phprados/blob/master/rados.c Please update the main source if it looks correct. On Fri, Mar 20, 2015 at 1:11 PM, Gaurang Vyas wrote: > If I run it from the command prompt it gives the below error in $piece = rados_read($ioRados, 'TEMP_object', $pieceSize['psize'], 0); > > >
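For anyone hitting the same problem from other bindings: the usual workaround is to stat the object first and then read it in fixed-size pieces, rather than issuing one large read. A sketch of that approach using the python-rados bindings (the pool name, object name, and chunk size are illustrative assumptions):

    import rados

    CHUNK = 4 * 1024 * 1024  # read 4 MB per call instead of one huge read

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('data')  # assumed pool name
    try:
        size, _mtime = ioctx.stat('TEMP_object')
        data, offset = b'', 0
        while offset < size:
            piece = ioctx.read('TEMP_object', min(CHUNK, size - offset), offset)
            data += piece
            offset += len(piece)
    finally:
        ioctx.close()
        cluster.shutdown()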

[ceph-users] Replacing a failed OSD disk drive (or replace XFS with BTRFS)

2015-03-21 Thread Datatone Lists
I have been experimenting with Ceph, and have some OSDs with drives containing XFS filesystems which I want to change to BTRFS. (I started with BTRFS, then started again from scratch with XFS [currently recommended] in order to eliminate that as a potential cause of some issues; now with further ex