Re: [ceph-users] cephfs - disabling cache on client and on OSDs

2015-01-30 Thread Mudit Verma
Hi Greg, thanks. We need end-to-end (client disk to OSD disk) latency/throughput measurements for READs and WRITEs. Writes can be made write-through, but we are having difficulties with reads. Thanks, Mudit On 31-Jan-2015 5:03 AM, "Gregory Farnum" wrote: > I don't think there's any way to force the OSDs to do
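A trick sometimes used for this kind of read benchmark (my own suggestion, not something from the thread) is to drop the kernel page cache on the OSD nodes between runs, so reads have to come from disk even if the OSDs themselves cannot be told to skip caching:

    # On each OSD node: flush dirty data, then drop page cache, dentries and inodes
    sync
    echo 3 > /proc/sys/vm/drop_caches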

Re: [ceph-users] btrfs backend with autodefrag mount option

2015-01-30 Thread Luke Kao
Thanks Lionel, we are using btrfs compression and it has also been stable in our cluster. Another minor problem with btrfs fragmentation at the moment is that we sometimes see the btrfs-transacti kernel thread pause the whole OSD node's I/O for seconds, impacting all OSDs on the server, especially when doing recovery / ba

Re: [ceph-users] No auto-mount of OSDs after server reboot

2015-01-30 Thread Anthony D'Atri
One thing that can cause this is messed-up partition IDs / typecodes. Check out the ceph-disk script to see how they get applied. I have a few systems that somehow got messed up -- at boot they don't get started, but if I mounted them manually on /mnt, checked out the whoami file and remoun
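For anyone hitting the same thing, here is a sketch of how one might inspect and re-apply the GPT typecode that ceph-disk/udev key off (device and partition number are placeholders; the GUID is the standard Ceph OSD data type code):

    # Show the partition type GUID that the udev rules match on
    sgdisk --info=1 /dev/sdb

    # Re-apply the Ceph OSD data typecode and re-read the partition table
    sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d /dev/sdb
    partprobe /dev/sdb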

Re: [ceph-users] cephfs - disabling cache on client and on OSDs

2015-01-30 Thread Gregory Farnum
I don't think there's any way to force the OSDs to do that. What exactly are you trying to do? -Greg On Fri, Jan 30, 2015 at 4:02 AM, Mudit Verma wrote: > Hi All, > > We are working on a project where we are planning to use Ceph as storage. > However, for one experiment we are required to disable

[ceph-users] calamari server error 503 detail rpc error lost remote after 10s heartbeat

2015-01-30 Thread Tony
Hi, I have Ceph Giant installed and have installed/compiled Calamari, but I am getting "calamari server error 503 detail rpc error lost remote after 10s heartbeat". It seems Calamari doesn't have contact with Ceph for some reason. Is there any way to configure Calamari manually to get status and fix the 503 error?
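Not an answer from the thread, but a common first round of checks when Calamari loses its heartbeat is to verify the salt connection between the Calamari server and the Ceph nodes (assuming the usual Calamari/salt setup):

    # On the Calamari server: the minions' keys should be listed as accepted
    salt-key -L

    # The master should be able to reach every minion
    salt '*' test.ping

    # Restart the pieces if needed
    service salt-minion restart        # on each Ceph node
    supervisorctl restart cthulhu      # Calamari backend, on the server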

Re: [ceph-users] RBD caching on 4K reads???

2015-01-30 Thread Udo Lembke
Hi Bruce, you can also look on the mon, like ceph --admin-daemon /var/run/ceph/ceph-mon.b.asok config show | grep cache (I guess you have a number instead of the .b. ) Udo On 30.01.2015 22:02, Bruce McFarland wrote: > > The ceph daemon isn’t running on the client with the rbd device so I > can’t

Re: [ceph-users] RBD caching on 4K reads???

2015-01-30 Thread Bruce McFarland
The ceph daemon isn't running on the client with the rbd device so I can't verify if it's disabled at the librbd level on the client. If you mean on the storage nodes I've had some issues dumping the config. Does the rbd caching occur on the storage nodes, client, or both? From: Udo Lembke [ma
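For what it's worth (my understanding, not stated in the thread): the librbd cache only exists in userspace clients such as QEMU/librbd; a kernel-mapped /dev/rbd device goes through the kernel page cache instead, so there is no librbd admin socket to query on that client. One way to take the client page cache out of a read test is direct I/O, e.g.:

    # Read the mapped device with O_DIRECT, bypassing the client page cache
    dd if=/dev/rbd1 of=/dev/null bs=4k count=100000 iflag=direct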

Re: [ceph-users] RBD caching on 4K reads???

2015-01-30 Thread Udo Lembke
Hi Bruce, hmm, that sounds to me like the rbd cache. Can you check whether the cache is really disabled in the running config with ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show | grep cache Udo On 30.01.2015 21:51, Bruce McFarland wrote: > > I have a cluster and have created a rbd device -

[ceph-users] RBD caching on 4K reads???

2015-01-30 Thread Bruce McFarland
I have a cluster and have created a rbd device - /dev/rbd1. It shows up as expected with 'rbd --image test info' and 'rbd showmapped'. I have been looking at cluster performance with the usual Linux block device tools - fio and vdbench. When I look at writes and large block sequential reads I'm see
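If the suspicion is the client page cache rather than librbd caching (a guess, since the message is truncated here), running fio with direct I/O against the mapped device takes the page cache out of the measurement:

    # 4K random reads with O_DIRECT against the mapped rbd device
    fio --name=randread --filename=/dev/rbd1 --rw=randread --bs=4k \
        --direct=1 --ioengine=libaio --iodepth=16 --runtime=60 --time_based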

Re: [ceph-users] btrfs backend with autodefrag mount option

2015-01-30 Thread Lionel Bouton
On 01/30/15 14:24, Luke Kao wrote: > > Dear ceph users, > > Has anyone tried to add the autodefrag mount option when using btrfs as > the osd storage? > > > > In some previous discussions it was mentioned that btrfs osd startup becomes very slow > after being used for some time, just thinking that adding autodefrag will hel

[ceph-users] Moving a Ceph cluster (to a new network)

2015-01-30 Thread Don Doerner
All, I built up a ceph system on my little development network, then tried to move it to a different network. I edited the ceph.conf file, and fired it up and... well, I discovered that I was a bit naive. I looked through the documentation pretty carefully, and I can't see any list of places
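For the archives: the piece that does not follow ceph.conf is the monitor map, which has the old monitor addresses baked into it. A sketch of the usual fix (mon ID "a" and the address are placeholders):

    # Stop the monitor, extract its map, rewrite the address, inject it back
    ceph-mon -i a --extract-monmap /tmp/monmap
    monmaptool --print /tmp/monmap
    monmaptool --rm a /tmp/monmap
    monmaptool --add a 192.168.1.10:6789 /tmp/monmap
    ceph-mon -i a --inject-monmap /tmp/monmap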

Re: [ceph-users] error in sys.exitfunc

2015-01-30 Thread Travis Rhoden
Hi Karl, Sorry that I missed this go by. If you are still hitting this issue, I'd like to help you figure this one out, especially since you are not the only person to have hit it. Can you pass along your system details (OS, version, etc.)? I'd also like to know how you installed ceph-depl

Re: [ceph-users] btrfs backend with autodefrag mount option

2015-01-30 Thread Mark Nelson
About a year ago I was talking to j On 01/30/2015 07:24 AM, Luke Kao wrote: Dear ceph users, Has anyone tried to add autodefrag and mount option when use btrfs as the osd storage? Sort of. About a year ago I was looking into it, but Josef told me not to use either defrag or autodefrag. (esp

Re: [ceph-users] btrfs backend with autodefrag mount option

2015-01-30 Thread Mark Nelson
oops, mangled the first part of that reply a bit. Need my morning coffee. :) On 01/30/2015 07:56 AM, Mark Nelson wrote: About a year ago I was talking to j On 01/30/2015 07:24 AM, Luke Kao wrote: Dear ceph users, Has anyone tried to add autodefrag and mount option when use btrfs as the osd

[ceph-users] btrfs backend with autodefrag mount option

2015-01-30 Thread Luke Kao
Dear ceph users, Has anyone tried adding the autodefrag mount option when using btrfs as the OSD storage? In some previous discussions it was mentioned that btrfs OSD startup becomes very slow after the filesystem has been used for some time, so we are thinking that adding autodefrag will help. We will add it on our test cluster first to see
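For reference, the option would go either in fstab or in the "osd mount options btrfs" setting in ceph.conf; a minimal sketch (device, OSD id and the other options are assumptions for illustration):

    # /etc/fstab entry for an OSD data partition with autodefrag
    /dev/sdb1  /var/lib/ceph/osd/ceph-0  btrfs  rw,noatime,autodefrag  0 0

    # or equivalently in ceph.conf
    [osd]
        osd mount options btrfs = rw,noatime,autodefrag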

[ceph-users] cephfs - disabling cache on client and on OSDs

2015-01-30 Thread Mudit Verma
Hi All, We are working on a project where we are planning to use Ceph as storage. However, for one experiment we are required to disable the caching on the OSDs and on the client. We want any data transaction in the filesystem to be served directly from the OSDs' disks, without any cache involvement in between
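On the client side at least (a sketch of my own, not a confirmed recipe from the thread), the ceph-fuse object cacher can be switched off in ceph.conf; the OSD-side page cache is the part that, per Greg's reply, cannot be forced off:

    [client]
        # Disable the ceph-fuse objectcacher (client-side data caching)
        client oc = false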

Re: [ceph-users] No auto-mount of OSDs after server reboot

2015-01-30 Thread James Eckersall
I'm running Ubuntu 14.04 servers with Firefly and I don't have a sysvinit file, but I do have an upstart file. "touch /var/lib/ceph/osd/ceph-XX/upstart" should be all you need to do. That way, the OSDs should be mounted automatically on boot. On 30 January 2015 at 10:25, Alexis KOALLA wrote: >
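Spelled out with a placeholder OSD id, that would be something like:

    # Mark the OSD as upstart-managed, then let upstart start it
    touch /var/lib/ceph/osd/ceph-2/upstart
    start ceph-osd id=2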

Re: [ceph-users] No auto-mount of OSDs after server reboot

2015-01-30 Thread Alexis KOALLA
Hi Lindsay and Daniel, Thanks for your replies. Apologies for not specifying my lab environment details. Here they are: OS: Ubuntu 14.04 LTS, Kernel 3.8.0-29-generic Ceph version: Firefly 0.80.8 env: LAB @Lindsay: I'm wondering if putting the mount command in fstab is new to ceph or if it is recom

Re: [ceph-users] mon leveldb loss

2015-01-30 Thread Sebastien Han
Hi Mike, Sorry to hear that, I hope this can help you to recover your RBD images: http://www.sebastien-han.fr/blog/2015/01/29/ceph-recover-a-rbd-image-from-a-dead-cluster/ Since you don’t have your monitors, you can still walk through the OSD data dir and look for the rbd identifiers. Something
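The kind of thing the linked post walks through (paraphrasing, so treat the path and pattern as assumptions) is scanning each OSD's filestore for RBD object files to recover the image identifiers:

    # List RBD data objects in each OSD's filestore to find image prefixes
    find /var/lib/ceph/osd/ceph-*/current -name '*rbd*data*' | head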