> [...] to access the disks as RBDs. That sort of VM usage is
> just not how Ceph was designed to host VM disks. I know this doesn't
> answer your question, but I feel like you should have been asking a
> different question.
>
> On Tue, Jun 13, 2017 at 9:43 PM Nathanial Byrnes wrote:
[...] into how Xen is attaching to Ceph vs. Gluster.
Anyway, thanks again!
    Nate
On Tue, Jun 13, 2017 at 5:30 PM, Gregory Farnum wrote:
>
> On Thu, Jun 8, 2017 at 11:11 PM Nathanial Byrnes wrote:
Hi All,
    First, some background:
    I have been running a small (4 compute nodes) XenServer cluster
backed by both a small Ceph cluster (4 other nodes with a total of 18
single-spindle OSDs) and a small Gluster cluster (2 nodes, each with a
14-spindle RAID array). I started with Gluster 3-4 years ago, at [...]
On [...] at 10:15 AM, Ruben Kerkhof wrote:
On Sat, Jul 23, 2016 at 3:58 PM, Nathanial Byrnes wrote:
Hi All,
    I'm working with a fresh Debian 8.5 install, and I'm trying to
mount an RBD image from my cluster using the kernel module, without
installing any additional software. When I run:

    /bin/echo 10.88.28.23 name=admin,secret= xcp-vol-pool1 proxy-img1 > /sys/bus/rbd/add

I see the following [...]
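For reference, a minimal sketch of the sysfs map/unmap cycle the kernel
rbd module exposes (monitor address, pool, and image names are taken
from the command above; <KEY> stands in for the base64 secret from the
client keyring, which the archive has redacted):

    # Load the kernel rbd module if it isn't already loaded.
    modprobe rbd

    # General form: "MON_ADDR[:PORT] OPTIONS POOL IMAGE" written to the add node.
    echo "10.88.28.23 name=admin,secret=<KEY> xcp-vol-pool1 proxy-img1" > /sys/bus/rbd/add

    # On success a numbered block device appears, e.g. /dev/rbd0.
    mount /dev/rbd0 /mnt

    # To unmap, write the device id (the N in /dev/rbdN) to the remove node.
    umount /mnt
    echo 0 > /sys/bus/rbd/remove

The usual frontend for this is "rbd map pool/image", but that comes
from ceph-common, which is exactly the extra software being avoided
here.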
Thanks for the pointer. I didn't know the answer, but now I do, and
unfortunately XenServer is relying on the kernel module. It's
surprising that their latest release, XenServer 7, which came out on
the 6th of July, is still only using kernel 3.10... I guess since it is
based upon CentOS 7 and that [...]
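To check what a given dom0 actually has, the standard commands apply
(run on the XenServer host; output will vary):

    # Running kernel version (3.10.x on XenServer 7 / CentOS 7).
    uname -r

    # Is the kernel rbd module available, and is it loaded?
    modinfo rbd | head -n 5
    lsmod | grep rbd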
Hello,
    I've got a Jewel cluster (3 nodes, 15 OSDs) running with bobtail
tunables (my XenServer cluster uses 3.10 as its kernel, and there's no
upgrading that). I started the cluster out on Hammer, upgraded to
Jewel, discovered that optimal tunables would not work, and then set
the tunables to bobtail.
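For anyone following along, that tunables dance maps to these commands
(profile names are the standard Ceph ones; on a Jewel cluster "optimal"
selects the jewel profile, which pre-4.5 kernel clients such as
XenServer's 3.10 cannot negotiate, while bobtail is old enough for
them):

    # Show the CRUSH tunables the cluster is currently using.
    ceph osd crush show-tunables

    # "optimal" on Jewel picks the jewel profile; 3.10 kernel clients
    # will then fail to map images.
    ceph osd crush tunables optimal

    # Fall back to a profile old kernel clients understand.
    ceph osd crush tunables bobtail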