Hi,
I have a Ceph cluster with 26 OSDs across 4 hosts, used only for RBD by an
OpenStack cluster (started at 0.48, I think), currently running 0.94.2 on
Ubuntu 14.04. A few days ago one of the OSDs was at 85% disk usage while
only 30% of the raw disk space was used. I ran reweight-by-utilization with
1
Hi,
I need to verify, for Ceph v9.0.2, whether the kernel version of the Ceph
file system supports ACLs while the libcephfs file system interface does not.
I am trying to have SAMBA, version 4.3.0rc1, support Windows ACLs
using "vfs objects = acl_xattr" with the SAMBA VFS Ceph file system
interface "vfs objects
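For context, a minimal smb.conf share combining the two modules might look like the sketch below. The share name, path, and the ceph:config_file option are my assumptions for illustration, not taken from the message:

```ini
[cephshare]
    # Hypothetical share exporting the root of the Ceph file system.
    path = /
    # acl_xattr maps Windows NT ACLs onto the security.NTACL xattr;
    # the ceph module routes VFS calls through libcephfs (no kernel mount).
    vfs objects = acl_xattr ceph
    # Assumed option for pointing vfs_ceph at the cluster configuration:
    ceph:config_file = /etc/ceph/ceph.conf
```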
Hi Nick,
On Thu, Aug 13, 2015 at 4:37 PM, Nick Fisk wrote:
>> -----Original Message-----
>> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
>> Nick Fisk
>> Sent: 13 August 2015 18:04
>> To: ceph-users@lists.ceph.com
>> Subject: [ceph-users] How to improve single thread se
Have you tried setting read_ahead_kb to a bigger number on both the client
and OSD side, if you are using krbd?
In the case of librbd, try the different config options for rbd cache.
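As a sketch, the librbd cache knobs live in the [client] section of ceph.conf; the values below are illustrative, not tuning recommendations:

```ini
# ceph.conf on the client -- hammer-era librbd cache option names
[client]
    rbd cache = true
    rbd cache size = 67108864                  # 64 MB cache
    rbd cache max dirty = 50331648             # 48 MB dirty before writeback
    rbd cache writethrough until flush = true  # safe until guest issues a flush
```

For krbd, read-ahead is instead set per block device, e.g. via /sys/block/rbd0/queue/read_ahead_kb (device name assumed).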
Thanks & Regards
Somnath
-----Original Message-----
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
On Mon, Aug 17, 2015 at 9:38 AM, Eric Eastman
wrote:
> Hi,
>
> I need to verify in Ceph v9.0.2 if the kernel version of Ceph file
> system supports ACLs and the libcephfs file system interface does not.
> I am trying to have SAMBA, version 4.3.0rc1, support Windows ACLs
> using "vfs objects = acl_
On Sun, Aug 16, 2015 at 9:12 PM, Yan, Zheng wrote:
> On Mon, Aug 17, 2015 at 9:38 AM, Eric Eastman
> wrote:
>> Hi,
>>
>> I need to verify in Ceph v9.0.2 if the kernel version of Ceph file
>> system supports ACLs and the libcephfs file system interface does not.
>> I am trying to have SAMBA, versi
Hi All,
We need to test three OSDs and one image with replica 2 (size 1 GB). While
testing, data is not being written above 1 GB. Is there any option to write
to the third OSD?
ceph osd pool get repo pg_num
pg_num: 126
# rbd showmapped
id pool image snap device
0 rbd integdownlo
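On the capacity side: with a pool size (replica count) of 2, every object is stored twice, so usable space is the raw space divided by the replica count, however many OSDs exist. A back-of-the-envelope sketch, assuming three 1 GB OSDs (the sizes are my assumption, not from the message):

```shell
# Hypothetical figures: three OSDs of 1 GB each, pool "size" (replica count) of 2.
raw_gb=3
replicas=2
# Each object occupies space on 2 of the 3 OSDs, halving usable capacity.
echo "usable: $((raw_gb / replicas)) GB"   # -> usable: 1 GB
```

Which OSDs receive the data is then a matter of CRUSH placement across PGs, not of extra capacity; note also that a 1 GB RBD image cannot hold more than 1 GB regardless of the pool.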
Hi All,
Please also find the OSD information below.
ceph osd dump | grep 'replicated size'
pool 2 'repo' replicated size 2 min_size 2 crush_ruleset 0 object_hash rjenkins
pg_num 126 pgp_num 126 last_change 21573 flags hashpspool stripe_width 0
Regards
Prabu
On Mon, 1