Dear list,
Our Ceph cluster (version 0.87) is stuck in a warning state with
some OSDs out of their original CRUSH bucket:
health HEALTH_WARN 1097 pgs degraded; 15 pgs peering; 1 pgs
recovering; 1097 pgs stuck degraded; 16 pgs stuck inactive; 26148 pgs
stuck unclean; 1096 pgs stuck undersized
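What I was planning to try, based on my reading of the CRUSH docs, is to
move each stray OSD back under its host bucket with something like the
following (the osd id, weight and host name are placeholders for our real
values):

    ceph osd tree
    # put the OSD back under its host, keeping it under the default root
    ceph osd crush create-or-move osd.12 1.0 root=default host=node-a

Is that the right approach while the cluster is still recovering, or is it
safer to decompile the crush map (ceph osd getcrushmap / crushtool), fix
the placement there and inject it back in one go?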
On 07/12/15 05:55, Alex Gorbachev wrote:
> FWIW. Based on the excellent research by Mark Nelson
> (http://ceph.com/community/ceph-performance-part-2-write-throughput-without-ssd-journals/)
> we have dropped SSD journals altogether, and instead went for the
> battery-protected controller writeback cache.
Hello,
Thanks to Lionel for writing pretty much what I was going to say, in
particular regarding cache sizes and read-ahead cache allocations.
In addition to this, keep in mind that all writes still have to happen
twice per drive: once to the journal and once to the actual OSD data.
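As a back-of-the-envelope illustration (the numbers are purely made up to
show the effect):

    1 spinner, ~150 MB/s sequential  ->  ~75 MB/s left for client writes
                                         (journal + data on the same disk)
    20 OSDs x 75 MB/s / 3 replicas   =  ~500 MB/s aggregate client write
                                         throughput, best case

And that is the ceiling before seeks from competing reads and scrubs eat
into it.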
So when the cache is too busy to merge writes nicely, you really are left
with only about half of each drive's raw throughput for client I/O.
Hi,
I'm new to Ceph. I set up a small cluster and successfully connected kvm/qemu to
use block devices. Now I'm experimenting with CephFS. I use ceph-fuse on SLES12
(ceph 0.94). I can mount the file system and write to it, but only when the
admin keyring is present, which gives the FS client full access to the whole cluster.
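Ideally I would give the CephFS client its own key with limited caps
instead of using client.admin. From the docs my understanding is that
something along these lines should work, but I am not sure these are the
minimal caps (the client name, data pool, monitor address and mount point
are just placeholders):

    ceph auth get-or-create client.fsclient \
        mon 'allow r' \
        mds 'allow' \
        osd 'allow rw pool=cephfs_data' \
        -o /etc/ceph/ceph.client.fsclient.keyring

    ceph-fuse --id fsclient -k /etc/ceph/ceph.client.fsclient.keyring \
        -m 192.168.100.10:6789 /mnt/cephfs

Is that the recommended way to restrict a CephFS client on 0.94, or does
ceph-fuse still need broader caps than that?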
Hi John,
I am seeing this problem with Ceph v9.0.1 and the v4.1 kernel on all
nodes. This setup uses 4 CephFS client systems. They all have
the kernel driver version of CephFS loaded, but none are mounting the
file system. All 4 clients are using the libcephfs VFS interface to
Ganesha NFS.
In the last email, I stated the clients were not mounted using the
CephFS kernel driver. Re-checking the client systems, the
file systems are mounted after all, but all the I/O is going through
Ganesha NFS using the CephFS library interface.
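For reference, the export definition Ganesha is using on these clients
looks roughly like the following (export id, paths and options are
trimmed/anonymized, so treat it as a sketch rather than an exact copy of
our config):

    EXPORT {
        Export_Id = 1;
        Path = "/";
        Pseudo = "/cephfs";
        Access_Type = RW;
        FSAL {
            Name = CEPH;
        }
    }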
Hi,
I have a requirement where I need to set up Ceph in an environment where
hostname resolution is not available and I only have IP addresses to work
with. Is there a way to achieve this in Ceph? If so, what are the caveats
associated with that approach?
PS: I am using ceph-deploy for deployment.
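What I have sketched out so far (host names and addresses below are made
up) is to give every node a static entry in /etc/hosts so that name
resolution works locally without DNS, and to point the monitors at plain
IPs in ceph.conf:

    # /etc/hosts on every node, including the ceph-deploy admin node
    10.0.0.11   node1
    10.0.0.12   node2
    10.0.0.13   node3

    # ceph.conf
    [global]
    mon_initial_members = node1, node2, node3
    mon_host = 10.0.0.11, 10.0.0.12, 10.0.0.13

Would ceph-deploy be happy with that, given that it seems to compare node
names against the output of hostname -s on each host?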