[ceph-users] Openstack glance ceph rbd_store_user authentication problem

2013-12-10 Thread Vikrant Verma
Hi Steffen, With respect to your post mentioned in the link below http://lists.ceph.com/pipermail/ceph-users-ceph.com/2013-August/003370.html I am facing the same issue. Here is my error log from api.log: "2013-12-10 02:47:36.156 32509 TRACE glance.api.v1.upload_utils File "/usr/lib/python2
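For context, the settings that typically have to agree for the Glance RBD store are sketched below; the client name, pool, and keyring path are assumptions for illustration, not details taken from the thread:

    # In glance-api.conf (Havana-era option names; verify against your Glance version):
    #   default_store       = rbd
    #   rbd_store_user      = glance
    #   rbd_store_pool      = images
    #   rbd_store_ceph_conf = /etc/ceph/ceph.conf
    #
    # The cephx client named by rbd_store_user must exist, and its keyring must be
    # readable by the account the glance-api service runs as:
    ceph auth get-or-create client.glance mon 'allow r' \
        osd 'allow class-read object_prefix rbd_children, allow rwx pool=images' \
        -o /etc/ceph/ceph.client.glance.keyring
    chown glance:glance /etc/ceph/ceph.client.glance.keyring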

Re: [ceph-users] Openstack glance ceph rbd_store_user authentication problem

2013-12-10 Thread Karan Singh
Hi Vikrant, Can you share the output of ceph auth list and your glance-api.conf file? What are your plans with respect to configuration, and what do you want to achieve? Many Thanks Karan Singh - Original Message - From: "Vikrant Verma" To: thorh...@iti.cs.uni-magdeburg.de Cc: ceph-users@li
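The requested output can be gathered with something like the following; the config path is the usual packaged location and is an assumption:

    ceph auth list                                            # all cephx users and their caps
    grep -v '^#' /etc/glance/glance-api.conf | grep -i rbd    # the RBD store settings in use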

Re: [ceph-users] Anybody doing Ceph for OpenStack with OSDs across compute/hypervisor nodes?

2013-12-10 Thread Chris Hoy Poy
Ceph can be quite hard on CPU at times, so I would avoid this unless you have lots of CPU cycles to spare as well. \C - Original Message - From: "Blair Bethwaite" To: ceph-users@lists.ceph.com Sent: Tuesday, 10 December, 2013 10:04:01 AM Subject: [ceph-users] Anybody doing Ceph fo
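One quick way to judge whether a converged node actually has cycles to spare is to watch the OSD daemons directly; this is an illustrative check, not something prescribed in the thread:

    # Per-daemon CPU and memory use of the local OSDs
    ps -C ceph-osd -o pid,pcpu,pmem,etime,args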

Re: [ceph-users] ceph reliability in large RBD setups

2013-12-10 Thread Kyle Bader
> I've been running similar calculations recently. I've been using this tool from Inktank to calculate RADOS reliabilities with different assumptions: https://github.com/ceph/ceph-tools/tree/master/models/reliability But I've also had similar questions about RBD (or any multi-part files
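A rough back-of-the-envelope illustration of why striping changes the picture (the numbers are assumed, not taken from the thread): if a single RADOS object survives the period of interest with probability p, an RBD image striped across N independent objects survives only if all of them do, i.e. with probability about p^N. A 400 GB image at the default 4 MB object size spans roughly N = 100,000 objects, so even p = 0.999999 gives p^N ≈ 0.90, about a 10% chance of losing at least one object of the image.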

[ceph-users] Full OSD

2013-12-10 Thread Łukasz Jagiełło
Hi, Today my ceph cluster suffered from the following problem: #v+ root@dfs-s1:/var/lib/ceph/osd/ceph-1# df -h | grep ceph-1 /dev/sdc1 559G 438G 122G 79% /var/lib/ceph/osd/ceph-1 #v- The disk reports 122 GB of free space, which looks OK, but: #v+ root@dfs-s1:/var/lib/ceph/osd/ceph-1# touch aaa touch: cannot touch
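When df shows free space but writes still fail, a couple of checks usually narrow it down; these commands are illustrative and not taken from the original post:

    df -i /var/lib/ceph/osd/ceph-1   # inode exhaustion also returns ENOSPC
    ceph health detail               # reports near-full / full OSDs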

Re: [ceph-users] Impact of fancy striping

2013-12-10 Thread Craig Lewis
A general rule of thumb for separate journal devices is to use 1 SSD for every 4 OSDs. Since SSDs have no seek penalty, 4 partitions are fine. Going much above the 1:4 ratio can saturate the SSD. On your SAS journal device, by creating 9 partitions, you're forcing head seeks for every journa
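For a sense of where the 1:4 rule of thumb comes from (the throughput figures below are assumptions, not from the post): a single spinning disk behind an OSD typically sustains on the order of 100 MB/s of sequential writes, while an SSD of that era sustains roughly 400 MB/s, so about four journals (400 / 100) keep the SSD busy without making it the bottleneck. Nine journals sharing one spinning SAS device, by contrast, are limited by seek time rather than bandwidth.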

Re: [ceph-users] Failed to execute command: ceph-disk list

2013-12-10 Thread Mark Kirkwood
On 10/12/13 03:34, Alfredo Deza wrote: On Sat, Dec 7, 2013 at 7:17 PM, Mark Kirkwood wrote: On 08/12/13 12:14, Mark Kirkwood wrote: I wonder if it might be worth adding a check at the start of either ceph-deploy to look for binaries we are gonna need. ...growl: either ceph-deploy *or ceph-d
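A minimal sketch of the kind of pre-flight check being discussed, with an illustrative list of binaries (not an actual ceph-deploy patch):

    for bin in ceph ceph-disk ceph-authtool monmaptool; do
        command -v "$bin" >/dev/null 2>&1 || echo "missing required binary: $bin"
    done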