[ceph-users] Re: EC Pools w/ RBD - IOPs

2020-02-14 Thread Frank Schilder
With k=2 and m=1 you cannot do any maintenance with redundancy. Any failure requires immediate attention. It's a recipe for data loss. There have been lengthy discussions about why, in production, min_size>=k+1 is recommended and why min_size=k is not. For any k, with m=1 you either have service ou
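As a hedged illustration of that recommendation (profile name, pool name and values are placeholders), an EC profile with m=2 leaves room for min_size=k+1 while still tolerating a failure during maintenance:

    # example EC profile that can lose two shards
    ceph osd erasure-code-profile set ec42 k=4 m=2
    ceph osd pool create ecpool 128 128 erasure ec42
    # min_size=k+1 (here 5) refuses writes once only k shards remain,
    # instead of accepting writes with no redundancy left
    ceph osd pool get ecpool min_size
    ceph osd pool set ecpool min_size 5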

[ceph-users] Extended security attributes on cephfs (nautilus) not working with kernel 5.3

2020-02-14 Thread Stolte, Felix
Hi guys, I am exporting cephfs with samba using the vfs acl_xattr module, which stores ntfs acls in the security extended attributes. This works fine using a cephfs kernel mount with kernel version 4.15. Using kernel 5.3 I cannot access the security.ntacl attributes anymore. Attributes in user or ceph
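A quick way to compare the two xattr namespaces on the kernel mount (the path is hypothetical; reading security.* attributes usually requires root):

    # the NT ACL stored by vfs_acl_xattr
    getfattr -n security.NTACL -e base64 /mnt/cephfs/share/somefile
    # a user-namespace attribute for comparison
    setfattr -n user.test -v ok /mnt/cephfs/share/somefile
    getfattr -n user.test /mnt/cephfs/share/somefile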

[ceph-users] Re: Ceph and Windows - experiences or suggestions

2020-02-14 Thread bernard
Lars Täuber wrote: > I got the task to connect a Windows client to our existing ceph cluster. Tiger Bridge will extend your NTFS (or ReFS) file system into a Ceph cluster seamlessly. This is a software-only gateway with an HSM tiering and synchronization engine that turns inactive files on the N

[ceph-users] Strange speed issues with XFS and very small writes

2020-02-14 Thread Arvydas Opulskis
Hi, Cephers. I would like to hear your ideas about a strange situation we have in one of our clusters. It's a Luminous 12.2.12 cluster. Recently we added 3 nodes with 10x SSD OSDs to it and dedicated them to an SSD pool for our OpenStack volumes. Initial tests went well, IOPS were great, throughput was
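A minimal fio run of the kind one might use to reproduce the small-write behaviour on a test volume (file path, size and runtime are placeholders):

    fio --name=smallwrite --filename=/mnt/testvol/fio.dat \
        --ioengine=libaio --direct=1 --rw=randwrite \
        --bs=4k --iodepth=1 --numjobs=1 --size=1G --runtime=60 --time_based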

[ceph-users] Re: Benefits of high RAM on a metadata server?

2020-02-14 Thread Eugen Block
Hi Marco, the MDS cache size depends heavily on the load and the number of clients that access your CephFS (as always, I'd say). The mentioned 4 GB of RAM is appropriate for a few clients with no special requirements regarding performance, so basically it's a minimal sizing (as the depl
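If more RAM is available, the cache target can be raised accordingly; a sketch for Nautilus, where the 16 GiB value and the MDS name are only illustrations:

    # raise the MDS cache memory target to 16 GiB
    ceph config set mds mds_cache_memory_limit 17179869184
    # check how much of it is in use
    ceph daemon mds.<name> cache status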

[ceph-users] centos7 / nautilus where to get kernel 5.5 from?

2020-02-14 Thread Marc Roos
I have a default centos7 setup with nautilus. I have been asked to install 5.5 to check a 'bug'. Where should I get this from? I read that the elrepo kernel is not compiled like rhel.
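For what it's worth, the usual route to a mainline kernel on CentOS 7 is ELRepo's kernel-ml package; whether that currently means 5.5 depends on what ELRepo has built, so treat this as a sketch:

    rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
    yum install https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm
    yum --enablerepo=elrepo-kernel install kernel-ml
    grub2-set-default 0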

[ceph-users] Re: centos7 / nautilus where to get kernel 5.5 from?

2020-02-14 Thread Ilya Dryomov
On Fri, Feb 14, 2020 at 3:19 PM Marc Roos wrote: > > > I have default centos7 setup with nautilus. I have been asked to install > 5.5 to check a 'bug'. Where should I get this from? I read that the > elrepo kernel is not compiled like rhel. Hi Marc, I'm not sure what you mean by "not compiled li

[ceph-users] Re: Extended security attributes on cephfs (nautilus) not working with kernel 5.3

2020-02-14 Thread Ilya Dryomov
On Fri, Feb 14, 2020 at 12:20 PM Stolte, Felix wrote: > > Hi guys, > > I am exporting cephfs with samba using the vfs acl_xattr which stores ntfs > acls in the security extended attributes. This works fine using cephfs kernel > mount wither kernel version 4.15. > > Using kernel 5.3 I cannot acce

[ceph-users] Announcing go-ceph v0.2.0

2020-02-14 Thread John Mulligan
I'm happy to announce the very first formal release of the go-ceph API bindings. https://github.com/ceph/go-ceph/releases/tag/v0.2.0 They aim to play a similar role to the "pybind" python bindings in the ceph tree but for the Go language. These API bindings require the use of cgo. There are a
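Pulling the tagged release into a Go module looks roughly like this; since the bindings use cgo, the librados/librbd/libcephfs development headers need to be installed first (the package names below are the CentOS ones and vary by distro):

    yum install librados-devel librbd-devel libcephfs-devel
    go get github.com/ceph/go-ceph@v0.2.0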

[ceph-users] Re: slow using ISCSI - Help-me

2020-02-14 Thread Mike Christie
On 02/13/2020 08:52 PM, Gesiel Galvão Bernardes wrote: > Hi > > Em dom., 9 de fev. de 2020 às 18:27, Mike Christie > escreveu: > > On 02/08/2020 11:34 PM, Gesiel Galvão Bernardes wrote: > > Hi, > > > > Em qui., 6 de fev. de 2020 às 18:56, Mike Christie

[ceph-users] Re: slow using ISCSI - Help-me

2020-02-14 Thread Mike Christie
On 02/14/2020 10:25 AM, Mike Christie wrote: > On 02/13/2020 08:52 PM, Gesiel Galvão Bernardes wrote: >> Hi >> >> Em dom., 9 de fev. de 2020 às 18:27, Mike Christie > > escreveu: >> >> On 02/08/2020 11:34 PM, Gesiel Galvão Bernardes wrote: >> > Hi, >> > >>

[ceph-users] Learning Ceph - Workshop ideas for entry level

2020-02-14 Thread Ignacio Ocampo
Hi all, A group of friends and I are documenting a hands-on workshop about Ceph, https://github.com/Nafiux/ceph-workshop, for learning purposes. The idea is to provide step-by-step visibility into common scenarios, from basic usage to disaster and recovery scenarios. We will hold a workshop

[ceph-users] Re: Learning Ceph - Workshop ideas for entry level

2020-02-14 Thread Bob Wassell
I’ve found that having > 1 drive controller in Vagrant is problematic, although I do agree it would be logically preferred for > 1 OSD per node. That being said, have you all run into the problems that I’ve seen with > 1 HDD controller? Namely, the inability to use vagrant's up command afte

[ceph-users] Re: Bucket rename with

2020-02-14 Thread J. Eric Ivancich
On 2/4/20 12:29 PM, EDH - Manuel Rios wrote: > Hi > > A customer asked us about a normally easy problem: they want to rename a bucket. > > Checking the Nautilus documentation it looks like it's not possible right now, but I > checked the master documentation and apparently a CLI command should accomplish this. > > $ rados
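For anyone following the thread, the master-branch mechanism is exposed through radosgw-admin's bucket link subcommand with a new-name flag; a hedged sketch, since the exact flag names may differ between releases (bucket, new name and uid are placeholders):

    radosgw-admin bucket link --bucket=oldbucket --bucket-new-name=newbucket --uid=someuser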

[ceph-users] Re: Very bad performance on a ceph rbd pool via iSCSI to VMware esx

2020-02-14 Thread Mike Christie
On 02/13/2020 09:56 AM, Salsa wrote: > I have a 3-host ceph storage setup with 10 4TB HDDs per host. I defined a 3 > replica rbd pool and some images and presented them to a VMware host via > iSCSI, but the write performance is so bad that I managed to freeze a VM doing > a big rsync to a datastore
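One way to separate the RBD layer from the iSCSI/VMware path is to benchmark an image directly with rbd bench (pool and image names are placeholders):

    rbd bench --io-type write --io-size 4096 --io-threads 16 --io-pattern rand rbd_pool/test_image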

[ceph-users] Re: Learning Ceph - Workshop ideas for entry level

2020-02-14 Thread Ignacio Ocampo
Hi Bob, The way I work around that issue is to check whether the file for each hard disk already exists and, if not, create it. Take a look here: https://github.com/Nafiux/ceph-workshop/blob/master/Vagrantfile#L41-L57 That allows me to do vagrant up/halt multiple times without any problem. Thanks! On

[ceph-users] Monitor / MDS distribution over WAN

2020-02-14 Thread Brian Topping
I had posted about some of this a year ago in [1] and got some really helpful answers. Fortunately, I know a lot more now and feel a lot more comfortable with the scenario. Because I didn’t understand the architecture very well, I took a pause on distributing monitors and MDS over a WAN. I want

[ceph-users] bluestore compression questions

2020-02-14 Thread Andras Pataki
We're considering using bluestore compression for some of our data, and I'm not entirely sure how to interpret compression results. As an example, one of the osd perf dump results shows: "bluestore_compressed": 28089935, "bluestore_compressed_allocated": 115539968, "blu
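For reference, those counters can be pulled per OSD as below (the osd id is a placeholder, jq is optional). Roughly, bluestore_compressed is the stored compressed bytes, bluestore_compressed_allocated is the disk space allocated for them, and bluestore_compressed_original is the pre-compression size, so original/allocated approximates the effective on-disk saving:

    ceph daemon osd.0 perf dump | \
        jq '.bluestore | {bluestore_compressed, bluestore_compressed_allocated, bluestore_compressed_original}'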

[ceph-users] Re: Very bad performance on a ceph rbd pool via iSCSI to VMware esx

2020-02-14 Thread Salsa
‐‐‐ Original Message ‐‐‐ On Friday, February 14, 2020 4:49 PM, Mike Christie wrote: > On 02/13/2020 09:56 AM, Salsa wrote: > > > I have a 3-host ceph storage setup with 10 4TB HDDs per host. I defined a 3 > > replica rbd pool and some images and presented them to a VMware host via > > ISCS

[ceph-users] Re: Bucket rename with

2020-02-14 Thread EDH - Manuel Rios
Honestly, not having a function to rename a bucket in radosgw-admin is like not having a function to copy or move. It is something basic; without it, the workaround is to create a new bucket and move all the files, with the consequent loss of time and cost of computation. In addition to t

[ceph-users] Re: Bucket rename with

2020-02-14 Thread Matt Benjamin
The world lived for a long time without it, but it's certainly useful. If Abhishek and/or the backport team would like this for Nautilus, I will help retarget our downstream backport (it's bigger than 22 commits with the dependencies, I believe, n.b.). Matt On Fri, Feb 14, 2020 at 5:02 PM EDH - M