I'm neither a dev nor a well-informed Cepher, but I've seen posts suggesting the
PG count may be set too high; see
https://www.mail-archive.com/ceph-users@lists.ceph.com/msg16205.html
Also, we use 128 GB+ of RAM in production on the OSD servers, with 10 OSDs per
server, because it boosts the read cache, so you may w
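The PG-sizing concern in that thread comes down to a rule of thumb. A minimal sketch, assuming the commonly cited target of roughly 100 PGs per OSD divided by the replica count, rounded up to a power of two (the numbers here are my assumptions, not from the thread):

```python
# Rough rule-of-thumb PG sizing (an approximation, not an official
# formula): ~100 PGs per OSD, divided by the replica count, rounded
# up to the next power of two.
def suggest_pg_count(num_osds, replicas=3, pgs_per_osd=100):
    target = num_osds * pgs_per_osd / replicas
    power = 1
    while power < target:
        power *= 2  # round up to a power of two
    return power

print(suggest_pg_count(10))  # 10 OSDs, 3x replication -> 512
```

A cluster sized well above this guideline is what posts like the one linked above flag as "too high."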
Hi All,
We have added the ceph object storage meters to openstack ceilometer.
Here is the openstack doc update:
http://docs.openstack.org/admin-guide-cloud/content/section_telemetry-ceph-object-storage-metrics.html
Thanks
Swami
ceph osd rm osd.# should do the trick.

Kobi Laredo
Cloud Systems Engineer | (408) 409-KOBI
On Fri, Mar 20, 2015 at 4:02 PM, Robert LeBlanc
wrote:
> Yes, at this point, I'd export the CRUSH map, edit it, and import it back in.
> What version are you running?
>
> Robert LeBlanc
>
> Sent from
In long-term use I also had some issues with flashcache and EnhanceIO. I've
noticed frequent slow requests.
Andrei
- Original Message -
> From: "Robert LeBlanc"
> To: "Nick Fisk"
> Cc: ceph-users@lists.ceph.com
> Sent: Friday, 20 March, 2015 8:14:16 PM
> Subject: Re: [ceph-users]
Hi,
I was going through this simplified CRUSH algorithm given on the Ceph website:

def crush(pg):
    all_osds = ['osd.0', 'osd.1', 'osd.2', ...]
    result = []
    # size is the number of copies; primary+replicas
    while len(result) < size:
        --> r = hash(pg)
        chosen = all_osds[r % len(all_osds)]
        if chosen in result:
            continue  # an OSD can be picked only once
        result.append(chosen)
    return result
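The pseudocode above can be made runnable. One detail the simplified version glosses over: if `hash(pg)` never changes inside the loop, a collision loops forever, so real CRUSH hashes the PG together with a replica rank `r`. The sketch below approximates that with `hashlib` (my approximation, not the production algorithm):

```python
# Runnable sketch of the simplified CRUSH placement idea. The replica
# rank r is mixed into the hash so a collision produces a different
# candidate on the next iteration (an approximation of real CRUSH).
import hashlib

def crush(pg, all_osds, size):
    """Deterministically map a PG id to `size` distinct OSDs."""
    result = []
    r = 0  # replica rank; bumped every iteration so the hash changes
    while len(result) < size:
        digest = hashlib.sha256(f"{pg}:{r}".encode()).hexdigest()
        chosen = all_osds[int(digest, 16) % len(all_osds)]
        r += 1
        if chosen in result:
            continue  # an OSD can hold only one copy of the PG
        result.append(chosen)
    return result

osds = [f"osd.{i}" for i in range(6)]
print(crush(42, osds, 3))  # same pg always maps to the same OSDs
```

Note that the function is pure: any client hashing the same PG against the same OSD list computes the same placement, which is why no central lookup table is needed.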
When you reformat the drive, it generates a new UUID, so to Ceph it is as if
it were a brand-new drive. This does seem heavy-handed, but Ceph was
designed for things to fail, and it is not unusual to do things this way.
Ceph is not RAID, so you usually have to unlearn some RAID habits.
You could probably ke
Fixed:
https://github.com/gdvyas/phprados/blob/master/rados.c
Please update the main source if it looks correct.
On Fri, Mar 20, 2015 at 1:11 PM, Gaurang Vyas wrote:
> If I run from command prompt it gives below error in $piece =
> rados_read($ioRados, 'TEMP_object',$pieceSize['psize'] ,0);
>
>
>
I have been experimenting with Ceph, and have some OSDs with drives
containing XFS filesystems which I want to change to BTRFS.
(I started with BTRFS, then started again from scratch with XFS
[currently recommended] in order to eliminate that as a potential cause
of some issues, now with further ex