Re: [ceph-users] Group-based permissions issue when using ACLs on CephFS

2018-03-22 Thread Yan, Zheng
On Fri, Mar 23, 2018 at 5:14 AM, Josh Haft wrote: > Hello! > > I'm running Ceph 12.2.2 with one primary and one standby MDS. Mounting > CephFS via ceph-fuse (to leverage quotas), and enabled ACLs by adding > fuse_default_permissions=0 and client_acl_type=posix_acl to the mount > options. I then ex
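For reference, the two options quoted above are ordinary client config settings; a minimal sketch of setting them in ceph.conf and mounting (the mount point and client id are placeholders):

    # ceph.conf on the client, as described in the quoted post
    [client]
        fuse_default_permissions = 0
        client_acl_type = posix_acl

    # then mount with ceph-fuse, e.g.
    ceph-fuse --id admin /mnt/cephfs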

Re: [ceph-users] IO rate-limiting with Ceph RBD (and libvirt)

2018-03-22 Thread Anthony D'Atri
> FYI: I/O limiting in combination with OpenStack 10/12 + Ceph doesn't work > properly. Bug: https://bugzilla.redhat.com/show_bug.cgi?id=1476830 > That's an OpenStack bug, nothing to do with Ceph. Nothing stops you from using virsh to throt
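For context, throttling a guest disk directly through libvirt (independently of OpenStack) looks roughly like the sketch below; the domain name, device and limit values are placeholders:

    # cap one guest disk at ~500 IOPS and ~50 MB/s total (example values)
    virsh blkdeviotune myguest vda --total-iops-sec 500 --total-bytes-sec 52428800
    # show the limits currently applied
    virsh blkdeviotune myguest vda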

Re: [ceph-users] Erasure Coded Pools and OpenStack

2018-03-22 Thread Jason Dillaman
On Fri, Mar 23, 2018 at 8:08 AM, Mike Cave wrote: > Greetings all! > > > > I’m currently attempting to create an EC pool for my glance images, however > when I save an image through the OpenStack command line, the data is not > ending up in the EC pool. > > So a little information on what I’ve don
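The truncated reply above does not show the recommendation, but a common pattern for RBD-backed Glance on erasure-coded pools is to keep image metadata in a replicated pool and redirect the data objects to the EC pool; a sketch with hypothetical pool names:

    # ceph.conf on the Glance host, for the cephx user Glance connects with
    [client.glance]
        rbd default data pool = images-ec-data
    # Glance itself keeps pointing at the replicated "images" pool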

[ceph-users] Erasure Coded Pools and OpenStack

2018-03-22 Thread Mike Cave
Greetings all! I’m currently attempting to create an EC pool for my Glance images; however, when I save an image through the OpenStack command line, the data is not ending up in the EC pool. So a little information on what I’ve done so far. The way that I understand things to work is that you nee
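For what it's worth, an EC pool that RBD clients can write to needs overwrites enabled and the rbd application tag; a sketch with placeholder pool name and PG counts (not necessarily the exact steps taken above):

    ceph osd pool create images-ec-data 128 128 erasure
    ceph osd pool set images-ec-data allow_ec_overwrites true
    ceph osd pool application enable images-ec-data rbd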

Re: [ceph-users] ceph mds memory usage 20GB : is it normal ?

2018-03-22 Thread Yan, Zheng
Did the fs see lots of mount/umount operations? We recently found a memory leak bug in that area: https://github.com/ceph/ceph/pull/20148 Regards, Yan, Zheng On Thu, Mar 22, 2018 at 5:29 PM, Alexandre DERUMIER wrote: > Hi, > > I'm running cephfs since 2 months now, > > and my active msd memory usage is aro
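A quick way to see how much of that memory is cache versus heap, assuming the admin socket is reachable on the MDS host and the build uses tcmalloc (the mds id is a placeholder):

    ceph daemon mds.2 cache status     # inodes/dentries currently held in cache
    ceph tell mds.2 heap stats         # tcmalloc's view of allocated vs. freed heap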

Re: [ceph-users] Separate BlueStore WAL/DB : best scenario ?

2018-03-22 Thread Ronny Aasen
keep in mind that with 4+2 = 6 erasure coding, ceph cannot self-heal after a node dies if you have only 6 nodes. That means you have a degraded cluster, with lower performance and higher risk, until you replace or fix or buy a new node. It is kind of like losing a disk in raid5: you have to scr
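The self-heal point follows from CRUSH needing k+m distinct failure domains: with k=4, m=2 and host as the failure domain, all 6 hosts carry a shard of every PG, so a lost host's shards have nowhere to be rebuilt. A sketch of such a profile (the profile name is a placeholder):

    ceph osd erasure-code-profile set ec42 k=4 m=2 crush-failure-domain=host
    ceph osd erasure-code-profile get ec42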

[ceph-users] Group-based permissions issue when using ACLs on CephFS

2018-03-22 Thread Josh Haft
Hello! I'm running Ceph 12.2.2 with one primary and one standby MDS. I'm mounting CephFS via ceph-fuse (to leverage quotas) and have enabled ACLs by adding fuse_default_permissions=0 and client_acl_type=posix_acl to the mount options. I then export this mount via NFS and the clients mount it via NFSv4.1. After d
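A minimal way to reproduce a group-based ACL check on the NFS client side, with hypothetical path and group names:

    # grant a group access via ACL on the CephFS-backed export
    setfacl -m g:engineering:rwx /mnt/nfs/shared
    getfacl /mnt/nfs/shared
    # then test as a user who is only in "engineering" via supplementary groups
    sudo -u someuser touch /mnt/nfs/shared/testfile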

[ceph-users] Ceph talks/presentations at conferences/events

2018-03-22 Thread Kai Wagner
Hi all, don't know if this is the right place to discuss this but I was just wondering if there's any specific mailing list + web site where upcoming events (Ceph/Open Source/Storage) and conferences are discussed and generally tracked? Also I would like to sync upfront on topics that could be in

Re: [ceph-users] DELL R620 - SSD recommendation

2018-03-22 Thread Drew Weaver
Please note that the DC S3700/3710 was discontinued/EOL’d so it may not be a great idea to use those in new deployments as supply will eventually dry up and Intel apparently has no plans to offer a DC S4700 with similar endurance. From: ceph-users On Behalf Of Nghia Than Sent: Thursday, March

Re: [ceph-users] Separate BlueStore WAL/DB : best scenario ?

2018-03-22 Thread Hervé Ballans
On 21/03/2018 at 11:48, Ronny Aasen wrote: On 21 March 2018 at 11:27, Hervé Ballans wrote: Hi all, I have a question regarding a possible scenario to put both the WAL and DB on a separate SSD device for an OSD node composed of 22 OSDs (10k SAS HDDs, 1.8 TB). I'm thinking of 2 options (at about the
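For reference, placing DB and WAL on a separate device is expressed per OSD at creation time; a sketch with placeholder device names (the WAL lives inside the DB partition when no separate --block.wal is given):

    ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1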

[ceph-users] ceph mds memory usage 20GB : is it normal ?

2018-03-22 Thread Alexandre DERUMIER
Hi, I've been running cephfs for 2 months now, and my active mds memory usage is around 20G now (still growing). ceph 1521539 10.8 31.2 20929836 20534868 ? Ssl janv.26 8573:34 /usr/bin/ceph-mds -f --cluster ceph --id 2 --setuser ceph --setgroup ceph USER PID %CPU %MEM VSZ RSS TT
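On Luminous, the knob that bounds MDS cache memory is mds_cache_memory_limit (actual RSS typically runs somewhat above the configured value); a sketch reusing the mds id from the ps output above, with an example 4 GiB limit:

    ceph daemon mds.2 config get mds_cache_memory_limit
    ceph daemon mds.2 config set mds_cache_memory_limit 4294967296   # 4 GiB, example value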

[ceph-users] Bluestore cluster, bad IO perf on blocksize<64k... could it be throttling ?

2018-03-22 Thread Frederic BRET
Hi all, We are seeing poor IO performance with block sizes < 64k on our new Bluestore test cluster. The context: - Test cluster separate from the production one - Fresh install on Luminous - Choice of Bluestore (coming from Filestore) - Default config (including wpq queuing) - 6 nodes SAS12, 14 OSD, 2 SSD, 2 x 10Gb nodes
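Two quick checks that help separate client-side behaviour from OSD-side throttling, with placeholder pool and OSD names:

    # small-block write behaviour straight against RADOS
    rados bench -p testpool 30 write -b 4096 -t 16
    # dump the throttle-related settings of one OSD
    ceph daemon osd.0 config show | grep -i throttle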

Re: [ceph-users] Difference in speed on Copper of Fiber ports on switches

2018-03-22 Thread Christian Wuerdig
I think the primary areas where people are concerned about latency are rbd and 4k block size access. OTOH, 2.3us latency seems to be 2 orders of magnitude below what seems to be realistically achievable on a real-world cluster anyway ( http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-July/
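One way to put the two numbers side by side is to compare raw network RTT with single-threaded 4k write latency on the cluster; a sketch with placeholder host and pool names:

    ping -c 100 osd-host-1                          # round-trip time through the switch
    rados bench -p testpool 20 write -b 4096 -t 1   # one outstanding write; avg latency is roughly 1/IOPS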

Re: [ceph-users] IO rate-limiting with Ceph RBD (and libvirt)

2018-03-22 Thread Sinan Polat
FYI: I/O limiting in combination with OpenStack 10/12 + Ceph doesn’t work properly. Bug: https://bugzilla.redhat.com/show_bug.cgi?id=1476830 > On 22 Mar 2018 at 07:59, Wido den Hollander wrote the > following: > > > >> On 03/21/2018 06:48 PM, Andre Goree wrote: >> I'm trying to det