Re: [ceph-users] removing 'rados cppool' command

2016-05-07 Thread Mykola Golub
On Fri, May 06, 2016 at 03:41:34PM -0400, Sage Weil wrote:
> This PR
>
> https://github.com/ceph/ceph/pull/8975
>
> removes the 'rados cppool' command. The main problem is that the command
> does not make a faithful copy of all data because it doesn't preserve the
> snapshots (and snapsh
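To make the limitation concrete, here is a minimal sketch, using the python-rados bindings, of what a flat pool copy amounts to. This is not the cppool implementation itself; the pool names are placeholders, and the loop copies only the current (HEAD) contents of each object, so snapshots (and, for brevity in this sketch, xattrs and omap data) are not carried over.

    import rados

    # Naive pool copy: iterate the source pool's objects and rewrite their
    # current contents into the destination pool. Assumes python-rados is
    # installed and /etc/ceph/ceph.conf points at the cluster.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        src = cluster.open_ioctx('src-pool')   # placeholder pool names
        dst = cluster.open_ioctx('dst-pool')
        for obj in src.list_objects():
            size, _mtime = src.stat(obj.key)   # size of the HEAD object only
            dst.write_full(obj.key, src.read(obj.key, size))
        src.close()
        dst.close()
    finally:
        cluster.shutdown()

Nothing in this loop ever looks at the pool's snapshot state, which is exactly the kind of silent data loss the quoted message is pointing at.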

[ceph-users] OSD - single drive RAID 0 or JBOD?

2016-05-07 Thread Tim Bishop
Hi all, I've got servers (Dell R730xd) with a number of drives connected to a Dell H730 RAID controller. I'm trying to make a decision about whether I should put the drives in "Non-RAID" mode, or if I should make individual RAID 0 arrays for each drive. Going for the RAID 0 approach would mean

[ceph-users] CfP 11th Workshop on Virtualization in High-Performance Cloud Computing (VHPC '16) (deadline extended May 20th)

2016-05-07 Thread VHPC 16
CfP 11th Workshop on Virtualization in High-Performance Cloud Computing (VHPC '16) CALL FOR PAPERS 11th Workshop on Virtualization in High-Performance Cloud Computing (VHPC '16) held in conjunction with the International Su

Re: [ceph-users] Ceph Read/Write Speed

2016-05-07 Thread Roozbeh Shafiee
Thank you, Mark, for your response. The problem was caused by some kernel issues. I installed the Jewel version on CentOS 7 with the 3.10 kernel, and it seems 3.10 is too old for Ceph Jewel, so after upgrading to kernel 4.5.2 everything was fixed and works perfectly. Regards, Roozbeh On May 3, 2016 21:13, "Mark Nels

Re: [ceph-users] Ceph Read/Write Speed

2016-05-07 Thread Mark Nelson
Interesting, we've seen some issues with aio_submit and NVMe cards with 3.10, but haven't seen any issues with spinning disks. Mark On 05/07/2016 01:00 PM, Roozbeh Shafiee wrote: Thank you, Mark, for your response. The problem was caused by some kernel issues. I installed the Jewel version on CentOS 7 w

[ceph-users] How to avoid kernel conflicts

2016-05-07 Thread K.C. Wong
Hi, I saw this tip in the troubleshooting section: "DO NOT mount kernel clients directly on the same node as your Ceph Storage Cluster, because kernel conflicts can arise. However, you can mount kernel clients within virtual machines (VMs) on a single node." Does this mean having a converged depl

Re: [ceph-users] How to avoid kernel conflicts

2016-05-07 Thread ceph
As the tip said, you should not use rbd via the kernel module on an OSD host. However, using it with userspace code (librbd etc., as in kvm) is fine. Generally, you should not have both:
- "server" in userspace
- "client" in kernelspace
On 07/05/2016 22:13, K.C. Wong wrote:
> Hi,
>
> I saw this tip i
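For anyone wondering what "userspace code" means in practice here, below is a minimal sketch using the librbd Python bindings (python-rados/python-rbd). It is only meant to show that no krbd mapping, and therefore no kernel client on the OSD host, is involved; the pool and image names are placeholders.

    import rados
    import rbd

    # Userspace access via librados/librbd: the image is created, written
    # and read without ever mapping it through the kernel rbd module.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('rbd')                   # placeholder pool
        rbd.RBD().create(ioctx, 'test-image', 1 * 1024**3)  # 1 GiB image
        with rbd.Image(ioctx, 'test-image') as image:
            image.write(b'hello from userspace', 0)         # write at offset 0
            print(image.read(0, 20))                        # read it back
        ioctx.close()
    finally:
        cluster.shutdown()

QEMU/KVM's rbd driver takes the same librbd path, which is why the VM case is considered fine on a node that also runs OSDs.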

[ceph-users] Migrating from Ubuntu to CentOS / RHEL

2016-05-07 Thread Tu Holmes
Hey Cephers. So I have been thinking about migrating my Ceph cluster from Ubuntu to CentOS. I have a lot more experience with CentOS and RHEL. What would be the best path to do this in your opinions? My overall thought would be to rebuild my mons with CentOS and upgrade the kernel and finally m

Re: [ceph-users] ACL support in Jewel using fuse and SAMBA

2016-05-07 Thread Eric Eastman
On Fri, May 6, 2016 at 2:14 PM, Eric Eastman wrote:
> As it should be working, I will increase the logging level in my
> smb.conf file and see what info I can get out of the logs, and report back.
Setting the log level = 20 in my smb.conf file, and trying to add an additional user to a directory