On Fri, May 06, 2016 at 03:41:34PM -0400, Sage Weil wrote:
> This PR
>
> https://github.com/ceph/ceph/pull/8975
>
> removes the 'rados cppool' command. The main problem is that the command
> does not make a faithful copy of all data because it doesn't preserve the
> snapshots (and snapsh
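(For context, a naive per-object copy has the same shortcoming. Below is a rough
sketch with the Python rados bindings, using hypothetical pool names 'srcpool'
and 'dstpool': it copies only the head objects, so pool and self-managed
snapshots are lost, and as written it also skips xattrs and omap data.)

import rados

# Naive pool copy sketch: head object data only; snapshots, xattrs and omap
# are NOT preserved. Pool names are placeholders.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    src = cluster.open_ioctx('srcpool')
    dst = cluster.open_ioctx('dstpool')
    for obj in src.list_objects():
        size, _mtime = src.stat(obj.key)
        data = src.read(obj.key, length=size)
        dst.write_full(obj.key, data)
    src.close()
    dst.close()
finally:
    cluster.shutdown()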
Hi all,
I've got servers (Dell R730xd) with a number of drives connected to a
Dell H730 RAID controller. I'm trying to decide whether I should put the
drives in "Non-RAID" mode, or make an individual RAID 0 array for each drive.
Going for the RAID 0 approach would mean
CfP 11th Workshop on Virtualization in High-Performance Cloud
Computing (VHPC '16)
CALL FOR PAPERS
11th Workshop on Virtualization in High-Performance Cloud Computing
(VHPC '16) held in conjunction with the International Su
Thank you, Mark, for your response.
The problem was caused by a kernel issue. I installed the Jewel release on
CentOS 7 with the 3.10 kernel, and it seems 3.10 is too old for Ceph Jewel;
after upgrading to kernel 4.5.2, everything was fixed and works perfectly.
Regards,
Roozbeh
On May 3, 2016 21:13, "Mark Nels
Interesting, we've seen some issues with aio_submit and NVMe cards with
3.10, but haven't seen any issues with spinning disks.
Mark
On 05/07/2016 01:00 PM, Roozbeh Shafiee wrote:
Thank you, Mark, for your response.
The problem was caused by a kernel issue. I installed the Jewel release on
CentOS 7 w
Hi,
I saw this tip in the troubleshooting section:
DO NOT mount kernel clients directly on the same node as your Ceph Storage
Cluster, because kernel conflicts can arise. However, you can mount kernel
clients within virtual machines (VMs) on a single node.
Does this mean having a converged depl
As the tip says, you should not use RBD via the kernel module on an OSD host.
However, using it with userspace code (librbd etc., as in KVM) is fine; see the
sketch below.
Generally, you should not have both on the same node:
- a "server" in userspace (e.g. an OSD)
- a "client" in kernelspace (e.g. the rbd or CephFS kernel module)
On 07/05/2016 22:13, K.C. Wong wrote:
> Hi,
>
> I saw this tip i
Hey Cephers.
So I have been thinking about migrating my Ceph cluster from Ubuntu to
CentOS.
I have a lot more experience with CentOS and RHEL.
What would be the best path to do this, in your opinion?
My overall thought would be to rebuild my mons with CentOS and upgrade the
kernel and finally m
On Fri, May 6, 2016 at 2:14 PM, Eric Eastman wrote:
> As it should be working, I will increase the logging level in my
> smb.conf file and see what info I can get out of the logs, and report back.
Setting the log level = 20 in my smb.conf file, and trying to add an
additional user to a directory
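For reference, the change is a one-line setting in the [global] section of
smb.conf (the log file path below is only an example):

[global]
    log level = 20
    log file = /var/log/samba/log.%m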