[ceph-users] HEALTH_WARN and PGs out of buckets

2015-07-12 Thread Simone Spinelli
Dear list, our ceph cluster (ceph version 0.87) is stuck in a warning state with some OSDs out of their original bucket: health HEALTH_WARN 1097 pgs degraded; 15 pgs peering; 1 pgs recovering; 1097 pgs stuck degraded; 16 pgs stuck inactive; 26148 pgs stuck unclean; 1096 pgs stuck undersized…
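A hedged starting point for triaging a cluster in this state (standard ceph CLI calls available in 0.87; the exact output depends on the cluster):

```shell
# Show the per-PG reasons behind the HEALTH_WARN summary
ceph health detail

# Compare the live OSD tree against the intended CRUSH buckets,
# to spot OSDs that have moved out of their original bucket
ceph osd tree

# List the PGs that are stuck, by state
ceph pg dump_stuck unclean
ceph pg dump_stuck degraded
```

These commands are read-only, so they are safe to run repeatedly while the cluster recovers.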

Re: [ceph-users] Real world benefit from SSD Journals for a more read than write cluster

2015-07-12 Thread Lionel Bouton
On 07/12/15 05:55, Alex Gorbachev wrote: > FWIW. Based on the excellent research by Mark Nelson > (http://ceph.com/community/ceph-performance-part-2-write-throughput-without-ssd-journals/) > we have dropped SSD journals altogether, and instead went for the > battery-protected controller writeback cache…

Re: [ceph-users] Real world benefit from SSD Journals for a more read than write cluster

2015-07-12 Thread Christian Balzer
Hello, thanks to Lionel for writing pretty much what I was going to, in particular cache sizes and read-ahead cache allocations. In addition to this, keep in mind that all writes still have to happen twice per drive, journal and actual OSD. So when the cache is too busy to merge writes nicely, you…
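The double-write penalty described above is easy to quantify. A minimal sketch with illustrative numbers (the 150 MB/s figure is an assumption, not a measurement from this thread):

```shell
# Assume a SATA drive sustaining ~150 MB/s sequential writes.
# With the journal co-located on the data disk, each client write
# lands twice (journal, then OSD filestore), so usable write
# bandwidth per OSD is at best half the raw disk bandwidth.
disk_mb_s=150
per_osd_mb_s=$((disk_mb_s / 2))
echo "effective per-OSD write bandwidth: ${per_osd_mb_s} MB/s"
```

Moving the journal to a separate SSD removes this halving, which is exactly the trade-off the thread is weighing against a controller writeback cache.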

[ceph-users] cephfs without admin key

2015-07-12 Thread Bernhard Duebi
Hi, I'm new to ceph. I set up a small cluster and successfully connected kvm/qemu to use block devices. Now I'm experimenting with CephFS. I use ceph-fuse on SLES12 (ceph 0.94). I can mount the file-system and write to it, but only when the admin keyring is present, which gives the FS client full…
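A common approach is to create a dedicated cephx identity for the CephFS client instead of shipping the admin keyring. A sketch (the client name `cephfs`, the pool name `data`, and the paths are placeholders, not values from this thread):

```shell
# Create a restricted key: read-only on the monitors, rw on the MDS,
# and rwx limited to the CephFS data pool
ceph auth get-or-create client.cephfs \
    mon 'allow r' \
    mds 'allow rw' \
    osd 'allow rwx pool=data' \
    -o /etc/ceph/ceph.client.cephfs.keyring

# Mount with ceph-fuse using that identity instead of client.admin
ceph-fuse -n client.cephfs -k /etc/ceph/ceph.client.cephfs.keyring /mnt/cephfs
```

The client then only needs its own keyring file present, not the admin key.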

Re: [ceph-users] mds0: Client failing to respond to cache pressure

2015-07-12 Thread Eric Eastman
Hi John, I am seeing this problem with Ceph v9.0.1 and the v4.1 kernel on all nodes. This system is using 4 Ceph FS client systems. They all have the kernel driver version of CephFS loaded, but none are mounting the file system. All 4 clients are using the libcephfs VFS interface to Ganesha NFS…

Re: [ceph-users] mds0: Client failing to respond to cache pressure

2015-07-12 Thread Eric Eastman
In the last email, I stated the clients were not mounted using the CephFS kernel driver. Re-checking the client systems, the file systems are mounted, but all the IO is going through Ganesha NFS using the ceph file system library interface. On Sun, Jul 12, 2015 at 9:02 PM, Eric Eastman…
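To see which client sessions the MDS considers responsible for the cache-pressure warning, the MDS admin socket can be queried on the node running the daemon. A sketch (`mds.0` is an assumed daemon name; substitute the actual MDS id):

```shell
# List client sessions as seen by the MDS, including the number of
# capabilities each client currently holds — libcephfs clients such
# as Ganesha appear here alongside kernel mounts
ceph daemon mds.0 session ls
```

A client holding an unusually large number of caps and not releasing them is typically the one triggering the warning.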

[ceph-users] Configuring Ceph without DNS

2015-07-12 Thread Abhishek Varshney
Hi, I have a requirement wherein I wish to set up Ceph where hostname resolution is not supported and I just have IP addresses to work with. Is there a way through which I can achieve this in Ceph? If yes, what are the caveats associated with that approach? PS: I am using ceph-deploy for deployment.
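Monitors can be addressed purely by IP in ceph.conf; a minimal sketch, assuming three monitors at made-up documentation addresses:

```ini
[global]
mon host = 192.0.2.11,192.0.2.12,192.0.2.13
```

The usual caveat is that ceph-deploy and some daemons still expect the local hostname to resolve to something; static entries in /etc/hosts on each node (e.g. `192.0.2.11 mon1`) can stand in for DNS without requiring a resolver.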