Re: [ceph-users] High disk utilisation

2015-11-29 Thread MATHIAS, Bryn (Bryn)
Hi Christian, I’ll give you a much better dump of detail :)

Running RHEL 7.1, ceph version 0.94.5. All ceph disks are xfs, with journals on a partition on the disk. Disks: 6Tb spinners. Erasure-coded pool with 4+1 EC, ISA-L also. No scrubbing reported in the ceph log; the cluster isn’t old enough …
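For context on the 4+1 profile mentioned above: an erasure-coded pool with k=4 data chunks and m=1 coding chunk yields 4/5 of raw capacity as usable space. A minimal sketch of that arithmetic, using the ~600Tb raw figure from the thread below (actual usable space also depends on full ratios and other pools):

```python
# Usable-capacity estimate for a k+m erasure-coded pool.
def ec_usable(raw_bytes, k, m):
    """Fraction of raw capacity available as usable space under k+m erasure coding."""
    return raw_bytes * k / (k + m)

raw = 600e12  # ~600 TB raw, as stated in the thread
print(ec_usable(raw, k=4, m=1) / 1e12)  # → 480.0 (TB usable, before filesystem/journal overheads)
```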

Re: [ceph-users] High disk utilisation

2015-11-29 Thread Christian Balzer
Hello,

On Mon, 30 Nov 2015 07:15:35 + MATHIAS, Bryn (Bryn) wrote:
> Hi All,
>
> I am seeing an issue with ceph performance.
> Starting from an empty cluster of 5 nodes, ~600Tb of storage.

It would be helpful to have more details (all details in fact) than this. Complete HW, OS, FS used, C…

[ceph-users] High disk utilisation

2015-11-29 Thread MATHIAS, Bryn (Bryn)
Hi All,

I am seeing an issue with ceph performance. Starting from an empty cluster of 5 nodes, ~600Tb of storage. Monitoring disk usage in nmon I see rolling 100% usage of a disk. `ceph -w` doesn’t report any spikes in throughput, and the application putting data is not spiking in the load generate…
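The utilisation figure nmon (and iostat) reports comes from the kernel's cumulative "milliseconds spent doing I/O" counter, field 10 of `/proc/diskstats`: the fraction of the sampling interval the disk was busy. A minimal sketch of that calculation from two samples — the counter values here are made up for illustration:

```python
def disk_util_pct(io_ms_t0, io_ms_t1, interval_s):
    """Percent of the interval the disk was busy, from the cumulative
    'milliseconds spent doing I/O' counter in /proc/diskstats (field 10)."""
    busy_ms = io_ms_t1 - io_ms_t0
    return min(100.0, 100.0 * busy_ms / (interval_s * 1000.0))

# Two hypothetical samples 5 s apart: the counter advanced by 5000 ms, i.e. the disk was pegged.
print(disk_util_pct(120000, 125000, 5))  # → 100.0
print(disk_util_pct(120000, 121000, 5))  # → 20.0
```

A disk showing rolling 100% by this measure can still have modest throughput if it is seek-bound, which is why `ceph -w` throughput and disk busy% can disagree.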

Re: [ceph-users] Ceph OSD: Memory Leak problem

2015-11-29 Thread prasad pande
On Sun, Nov 29, 2015 at 11:51 AM, Somnath Roy wrote:
> traceroute

Hi Somnath,

Thanks for the quick response. My MTU is the default one, that is 1500:

1: lo: mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 …

[ceph-users] python3 librados

2015-11-29 Thread misa-ceph
Hi everyone,

For my pet project I needed a python3 rados library, so I took the existing python2 rados code and cleaned it up a little to fit my needs. The lib contains the basic interface, asynchronous operations, and also an asyncio wrapper for convenience in asyncio programs. If you are inter…
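The asyncio-wrapper idea can be illustrated without a running cluster: run a blocking librados-style call in a thread-pool executor so it doesn't stall the event loop. `blocking_read` below is a stand-in for a synchronous call such as `ioctx.read`; the names are illustrative, not the posted library's actual API:

```python
import asyncio

def blocking_read(obj_name):
    # Stand-in for a synchronous librados call, e.g. ioctx.read(obj_name).
    return b"data-for-" + obj_name.encode()

async def async_read(obj_name):
    # Dispatch the blocking call to the default thread-pool executor,
    # yielding control to the event loop until it completes.
    loop = asyncio.get_running_loop()
    return await loop.run_in_executor(None, blocking_read, obj_name)

print(asyncio.run(async_read("foo")))  # → b'data-for-foo'
```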

Re: [ceph-users] Ceph OSD: Memory Leak problem

2015-11-29 Thread Somnath Roy
It could be a network issue in your environment. The first thing to check is the MTU (if you have changed it), and run a tool like traceroute to see whether all the cluster nodes are reachable from each other.

Thanks & Regards,
Somnath

From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of p…
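One quick way to perform the MTU check suggested above is to parse `ip link` output on each node and flag any interface that deviates from the expected default of 1500. A small sketch (the sample output is illustrative):

```python
import re

def interface_mtus(ip_link_output):
    """Map interface name -> MTU, parsed from `ip link` output."""
    return {m.group(1): int(m.group(2))
            for m in re.finditer(r"^\d+: ([^:]+):.*?\bmtu (\d+)",
                                 ip_link_output, re.M)}

sample = """1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP"""

mtus = interface_mtus(sample)
# Flag non-loopback interfaces whose MTU is not the expected 1500.
print({name: mtu for name, mtu in mtus.items()
       if name != "lo" and mtu != 1500})  # → {}
```

An MTU mismatch between nodes (e.g. jumbo frames on some hosts only) can silently drop large packets while small-packet tools like ping still succeed, which is why it is worth ruling out early.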