[ceph-users] Python APIs

2013-06-11 Thread Giuseppe "Gippa" Paternò
Hi! Sorry for the dumb question, but could you point me to the Python API reference docs for the object store? Do you have examples to share for reading files/dirs? Thanks, Giuseppe

Re: [ceph-users] Libvirt, qemu, ceph write cache settings

2013-06-11 Thread Wolfgang Hennerbichler
On 06/10/2013 07:35 PM, Stephane Boisvert wrote: > Hi, > I'm wondering how safe it is to use rbd cache = true with libvirt/qemu. > I did read the documentation and it says "When the OS sends a barrier > or a flush request, all dirty data is written to the OSDs. This means > that using write-back
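
For context, a minimal sketch of such a setup (the pool/image names and cache sizing are illustrative, not from this thread): caching is enabled client-side in ceph.conf, and the guest disk is given a matching write-back cache mode in its libvirt XML so that guest flushes and barriers reach librbd.

    [client]
    rbd cache = true
    rbd cache max dirty = 25165824    ; bytes that may be dirty before writeback kicks in

    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source protocol='rbd' name='rbd/vm-disk-1'/>
      <target dev='vda' bus='virtio'/>
    </disk>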

[ceph-users] Glance & RBD Vs Glance & RadosGW

2013-06-11 Thread Alvaro Izquierdo Jimeno
Hi all, I want to connect an OpenStack Folsom Glance service to Ceph. The first option is setting up glance-api.conf with 'default_store=rbd' and the user and pool. The second option is defined in https://blueprints.launchpad.net/glance/+spec/ceph-s3-gateway (An OpenStack installation tha
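
For the first option, a glance-api.conf sketch (the user and pool names are illustrative):

    default_store = rbd
    rbd_store_ceph_conf = /etc/ceph/ceph.conf
    rbd_store_user = glance
    rbd_store_pool = images
    rbd_store_chunk_size = 8    # object size in MB for striped images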

[ceph-users] ceph-deploy gatherkeys trouble

2013-06-11 Thread Peter Wienemann
Hi, I have problems with "ceph-deploy gatherkeys" in cuttlefish. When I run "ceph-deploy gatherkeys mon01" on my admin node, I get: "Unable to find /var/lib/ceph/bootstrap-osd/ceph.keyring on ['mon01']" and "Unable to find /var/lib/ceph/bootstrap-mds/ceph.keyring on ['mon01']". In an attempt to

[ceph-users] OSD

2013-06-11 Thread Roman Dilken
Hi, perhaps a silly question, but am I right that the OSDs have to be mounted via fstab? Today I started my test cluster and it worked after mounting the OSD partitions manually. Greetings, Roman

Re: [ceph-users] Libvirt, qemu, ceph write cache settings

2013-06-11 Thread Stephane Boisvert
Thanks for your answer, that was exactly what I was looking for! We'll go forward with that cache setting. Stephane On 13-06-11 05:24 AM, Wolfgang Hennerbichler wrote: On 06/10/2013 07:35 PM, Stephane Boisvert wrote:

Re: [ceph-users] ceph-deploy gatherkeys trouble

2013-06-11 Thread Gregory Farnum
These keys are created by the ceph-create-keys script, which should be launched when your monitors are. It requires a monitor quorum to have formed first. -Greg On Tuesday, June 11, 2013, Peter Wienemann wrote: > Hi, > > I have problems with "ceph-deploy gatherkeys" in cuttlefish. When I run > >
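
A quick way to verify this on the monitor host, assuming the mon id is mon01 (illustrative), is to check for quorum via the admin socket and then re-run the key creation:

    ceph --admin-daemon /var/run/ceph/ceph-mon.mon01.asok mon_status
    ceph-create-keys --id mon01    # writes the bootstrap-osd/bootstrap-mds keyrings once quorum exists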

Re: [ceph-users] OSD

2013-06-11 Thread Gregory Farnum
It shouldn't matter. The OSDs are generally activated when the disk is mounted, however that happens. -Greg On Tuesday, June 11, 2013, Roman Dilken wrote: > Hi, > > perhaps a silly question, but am I right that the osd have to be mounted > via fstab? > > Today I started my testcluster and it work
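
For illustration, the two usual paths (the device and OSD id are assumed, not from this thread):

    ceph-disk activate /dev/sdb1                      # udev normally triggers this automatically
    # or, after mounting by hand:
    mount /dev/sdb1 /var/lib/ceph/osd/ceph-0
    service ceph start osd.0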

[ceph-users] CephFS emptying files or silently failing to mount?

2013-06-11 Thread Bo
Howdy, y'all. We are testing Ceph and all of its features. We love RBD! However, CephFS, though clearly stated to be not production-ready, has been stonewalling us. In an attempt to get rolling quickly, we followed some guides on CephFS (http://goo.gl/BmVxG, http://goo.gl/1VtNk). When I mount CephFS, I
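
For reference, a typical kernel-client mount looks like this (the monitor address and auth details are illustrative):

    mount -t ceph mon01:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret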

Re: [ceph-users] CephFS emptying files or silently failing to mount?

2013-06-11 Thread Gregory Farnum
On Tue, Jun 11, 2013 at 9:39 AM, Bo wrote: > Howdy, y'all. > > We are testing Ceph and all of its features. We love RBD! However, CephFS, > though clearly stated to be not production-ready, has been stonewalling us. In an > attempt to get rolling quickly, we followed some guides on CephFS > (http://goo.g

Re: [ceph-users] CephFS emptying files or silently failing to mount?

2013-06-11 Thread Bo
Holy cow. Thank you for pointing out what should have been obvious. So glad these emails are kept on the web for future searchers like me ;) -bo On Tue, Jun 11, 2013 at 11:46 AM, Gregory Farnum wrote: > On Tue, Jun 11, 2013 at 9:39 AM, Bo wrote: > > Howdy, y'all. > > > > We are testing Ceph

[ceph-users] QEMU -drive setting (if=none) for rbd

2013-06-11 Thread w sun
Hi, We are currently testing performance with rbd caching enabled in write-back mode on our OpenStack (Grizzly) Nova nodes. By default, Nova fires up the rbd volumes in "if=none" mode, as evidenced by the following command line from "ps | grep": -drive file=rbd:ceph-openstack-volumes/volume-949

Re: [ceph-users] QEMU -drive setting (if=none) for rbd

2013-06-11 Thread Oliver Francke
Hi, On 11.06.2013 at 19:14, w sun wrote: > Hi, > > We are currently testing performance with rbd caching enabled in > write-back mode on our OpenStack (Grizzly) Nova nodes. By default, Nova fires > up the rbd volumes in "if=none" mode, as evidenced by the following command line > from "ps |
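
For reference, a hand-written variant of such a drive line (the volume name and drive id are illustrative); rbd options can also be passed inline in the file string, and with recent QEMU the drive's cache= mode is what selects write-back behaviour:

    -drive file=rbd:ceph-openstack-volumes/volume-XXXX:rbd_cache=true,if=none,id=drive-virtio-disk0,format=raw,cache=writeback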

Re: [ceph-users] Python APIs

2013-06-11 Thread John Wilkins
Here are the libraries for the Ceph Object Store. http://ceph.com/docs/master/radosgw/s3/python/ http://ceph.com/docs/master/radosgw/swift/python/ On Tue, Jun 11, 2013 at 2:17 AM, "Giuseppe \"Gippa\" Paternò" wrote: > Hi! Sorry for the dumb question, but could you point me to the Python > API r
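
The S3 docs above use boto; a minimal sketch for listing buckets and objects through radosgw (the keys and hostname are placeholders):

    import boto
    import boto.s3.connection

    conn = boto.connect_s3(
        aws_access_key_id='ACCESS_KEY',
        aws_secret_access_key='SECRET_KEY',
        host='objects.example.com',
        is_secure=False,
        calling_format=boto.s3.connection.OrdinaryCallingFormat(),
    )

    for bucket in conn.get_all_buckets():    # "dirs"
        print bucket.name
        for key in bucket.list():            # "files"
            print key.name, key.size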

Re: [ceph-users] ceph-deploy gatherkeys trouble

2013-06-11 Thread Peter Wienemann
Hi Greg, thanks for this very useful hint. I found the origin of the problem and will open a bug report in Redmine. Cheers, Peter On 06/11/2013 05:58 PM, Gregory Farnum wrote: These keys are created by the ceph-create-keys script, which should be launched when your monitors are. It requires

[ceph-users] More data corruption issues with RBD (Ceph 0.61.2)

2013-06-11 Thread Guido Winkelmann
Hi, I'm having issues with data corruption on RBD volumes again. I'm using RBD volumes as virtual hard disks for qemu-kvm virtual machines. Inside these virtual machines I have been running a C++ program (attached) that fills a mounted filesystem with 1-megabyte files of random data, while usi
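
The attachment is not reproduced here; the gist of such a checker, sketched in Python rather than the original C++ (the mount point and file count are illustrative):

    import hashlib, os

    base = '/mnt/test'    # filesystem on the RBD-backed disk
    sums = {}
    for i in range(1024):                       # fill with 1 MiB files of random data
        data = os.urandom(1024 * 1024)
        path = os.path.join(base, 'file-%05d' % i)
        with open(path, 'wb') as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())
        sums[path] = hashlib.sha1(data).hexdigest()
    for path, digest in sums.items():           # read back and flag corruption
        with open(path, 'rb') as f:
            if hashlib.sha1(f.read()).hexdigest() != digest:
                print 'CORRUPT:', path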

Re: [ceph-users] RBD

2013-06-11 Thread John Wilkins
Gary, I've added that instruction to the docs. It should be up shortly. Let me know if you have other feedback for the docs. Regards, John On Mon, Jun 10, 2013 at 9:13 AM, Gary Bruce wrote: > Hi again, > > I don't see anything in http://ceph.com/docs/master/start/ that mentions > installing ce

[ceph-users] Moving an MDS

2013-06-11 Thread Bryan Stillwell
I have a cluster I originally built on argonaut and have since upgraded to bobtail and then cuttlefish. I originally configured it with one node hosting both the MDS and the monitor, and 4 other nodes hosting OSDs: a1: mon.a/mds.a b1: osd.0, osd.1, osd.2, osd.3, osd.4, osd.20 b2: osd.5, osd

Re: [ceph-users] Moving an MDS

2013-06-11 Thread Gregory Farnum
On Tue, Jun 11, 2013 at 2:35 PM, Bryan Stillwell wrote: > I have a cluster I originally built on argonaut and have since > upgraded to bobtail and then cuttlefish. I originally configured > it with one node hosting both the MDS and the monitor, and 4 other nodes > hosting OSDs: > > a1: mon.

Re: [ceph-users] Moving an MDS

2013-06-11 Thread Bryan Stillwell
On Tue, Jun 11, 2013 at 3:50 PM, Gregory Farnum wrote: > You should not run more than one active MDS (less stable than a > single-MDS configuration, bla bla bla), but you can run multiple > daemons and let the extras serve as a backup in case of failure. The > process for moving an MDS is pretty e

Re: [ceph-users] Moving an MDS

2013-06-11 Thread Gregory Farnum
On Tue, Jun 11, 2013 at 3:04 PM, Bryan Stillwell wrote: > On Tue, Jun 11, 2013 at 3:50 PM, Gregory Farnum wrote: >> You should not run more than one active MDS (less stable than a >> single-MDS configuration, bla bla bla), but you can run multiple >> daemons and let the extras serve as a backup i
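
For context, bringing up an extra standby daemon generally amounts to something like the following (the id b1, paths, and cap details are illustrative):

    mkdir -p /var/lib/ceph/mds/ceph-b1
    ceph auth get-or-create mds.b1 mds 'allow' osd 'allow *' mon 'allow rwx' \
        -o /var/lib/ceph/mds/ceph-b1/keyring
    service ceph start mds.b1    # joins as a standby while the existing MDS stays active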

Re: [ceph-users] ceph-deploy gatherkeys trouble

2013-06-11 Thread Joshua Mesilane
Hi, I had this happen the first time I tried to deploy to a CentOS/RHEL system. Are you running one of these? I found that turning off iptables and disabling SELinux allowed me to get my test cluster up and running. Not sure that's the best approach for production, however.
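
A less drastic alternative than disabling the firewall outright is opening just the Ceph ports (the range below is the old 6800-7100 default for OSD/MDS daemons; adjust to taste):

    iptables -A INPUT -p tcp --dport 6789 -j ACCEPT         # monitors
    iptables -A INPUT -p tcp --dport 6800:7100 -j ACCEPT    # OSDs and MDS
    setenforce 0                                            # SELinux permissive, for testing only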

[ceph-users] ceph-deploy questions

2013-06-11 Thread Scottix
Hi everyone, I am new to Ceph but loving every moment of it. I am learning all of this now, so maybe this will help with the documentation. Anyway, I have a few questions about ceph-deploy. I was able to set up a cluster and get it up and running no problem with Ubuntu 12.04.2 that isn't the

[ceph-users] Building ceph from source

2013-06-11 Thread Sridhar Mahadevan
Hi, I am now trying to set up a ceph 0.61 build from source. I have built it and defined the config file in /etc/ceph/ceph.conf: [mon] mon data = /mnt/mon$id [mon.0] host = dsi mon addr = 10.217.242.28:6789 I created the directory /mnt/mon0. The hostname dsi res
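
A sketch of the manual monitor bootstrap matching that config (the fsid and temporary paths are illustrative; the mon id 0 matches [mon.0] above):

    ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
    monmaptool --create --add 0 10.217.242.28:6789 --fsid $(uuidgen) /tmp/monmap
    ceph-mon --mkfs -i 0 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
    ceph-mon -i 0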

[ceph-users] Issue with deploying OSD

2013-06-11 Thread Luke Jing Yuan
Hi, I have not been able to use ceph-deploy to prepare the OSDs. It seems that every time I execute this particular command (running the data and journal on the same disk), I end up with the message: ceph-disk: Error: Command '['partprobe','/dev/cciss/c0d1']' returned non-zero exit status 1
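
Not a confirmed fix, but a commonly suggested first step is to clear any stale partition table on the device before re-running ceph-deploy:

    sgdisk --zap-all /dev/cciss/c0d1
    partprobe /dev/cciss/c0d1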

[ceph-users] Devel Subscribe

2013-06-11 Thread Renkic Lausi