Tom,
I'm no expert as I didn't set it up, but we are using OpenStack Grizzly with KVM/QEMU and RBD volumes for VMs.
We boot the VMs from the RBD volumes and it all seems to work just fine.
Migration works perfectly, although live (no-break) migration only works from the command-line tools. The
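For the live case we just drive it by hand with virsh, something along these
lines (the domain and destination host names here are made up):

  virsh migrate --live somevm qemu+ssh://otherhost/system

With the disks already on shared RBD there is no local storage to copy, which
is presumably why it works so cleanly.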
I am testing scsi-target-utils tgtd with RBD support.
I have successfully created an iSCSI target using RBD as the backing store
and tested it.
It backs onto a RADOS pool called iscsi-spin with an RBD image called test.
Now I want it to survive a reboot. I have created a conf file
bs-type rbd
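For anyone interested, the conf file (dropped into /etc/tgt/conf.d/) is
roughly this; the IQN is made up, the pool/image are the ones above:

  <target iqn.2013-08.com.example:iscsi-spin-test>
      bs-type rbd
      backing-store iscsi-spin/test
  </target>

tgtd should pick that up when the service starts, which is really all I need
for it to survive a reboot.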
Performing yum updates on Fedora 19 now breaks qemu.
The package names and contents differ between the default Fedora ceph
packages and the ceph.com packages.
There is no ceph-libs package in the ceph.com repository, and qemu now
enforces its dependency on ceph-libs.
Yum update now pr
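In the meantime I'm just holding the affected packages back when updating,
something like this (the globs are just my guess at what needs pinning):

  yum update -x 'ceph*' -x 'qemu*'

or the equivalent exclude= line in the repo file, until the packaging gets
sorted out.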
What is the recommended MTU for a ceph cluster with gig ethernet?
Should the public interfaces and the cluster interfaces use jumbo frames?
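If jumbo frames are the way to go, I assume it's just a matter of bumping the
MTU on both networks, something like this (em1/em2 are just my interface
names, and the switches would obviously need to support it too):

  ip link set dev em1 mtu 9000   # public network
  ip link set dev em2 mtu 9000   # cluster network

plus MTU=9000 in the ifcfg files so it sticks across reboots.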
Regards
Darryl
My ceph cluster consists of 3 hosts in 3 locations, each with 2 SSDs and
4 spinning disks.
I have created a fresh ceph filesystem and started up ceph.
Ceph health reports HEALTH_OK.
I created a crushmap to suit our installation, where each host will be in
a separate rack, based on the example in the
I have a cluster of 3 hosts, each with 2 SSDs and 4 spinning disks.
I used the example in the crush map doco to create a crush map that places
the primary on an SSD and the replica on a spinning disk.
If I use the example as-is with 2 replicas, I end up with objects
replicated on the same host.
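For reference, the doco example I started from is essentially this (ssd and
platter being the root names in that example, min/max values approximate):

  rule ssd-primary {
      ruleset 5
      type replicated
      min_size 1
      max_size 10
      step take ssd
      step chooseleaf firstn 1 type host
      step emit
      step take platter
      step chooseleaf firstn -1 type host
      step emit
  }

Because the ssd and platter trees end up with separate host buckets for the
same physical machines, nothing ties the two choices together, so the SSD copy
and the spinning-disk copy can land on the same physical host - which is what
I'm seeing with 2 replicas.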
Question 1,
I have a 3 node ceph cluster with 6 disks in each node.
I upgraded from Bobtail 0.56.3 to 0.56.4 last night.
Before I started the upgrade, ceph status reported HEALTH_OK.
After upgrading and restarting the first node the status ended up at
HEALTH_WARN 133 pgs stale; 133 pgs stuck stale
After check
Ping,
Any ideas? A week later and it is still the same, 300 pgs stuck stale.
I have seen a few references since then recommending that there be no gaps
in the OSD numbers. Mine has gaps. Might this be the cause of my problem?
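For the record, this is what I've been using to poke at it:

  ceph health detail
  ceph pg dump_stuck stale
  ceph osd tree
  ceph pg 3.8 query    # 3.8 being one of the stuck pgs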
Darryl
On 04/05/13 07:27, Darryl Bond wrote:
I have a 3 node ceph
On 04/16/13 08:50, Dan Mick wrote:
On 04/04/2013 02:27 PM, Darryl Bond wrote:
# ceph pg 3.8 query
pgid currently maps to no osd
That means your CRUSH rules are wrong. What's the crushmap look like,
and what's the rule for pool 3?
# begin crush map
# devices
device 0 device
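That's from the decompiled map; I pulled and edited it with the usual
sequence (the scratch file names are just mine):

  ceph osd getcrushmap -o crush.bin
  crushtool -d crush.bin -o crush.txt
  # edit crush.txt, then:
  crushtool -c crush.txt -o crush.new
  ceph osd setcrushmap -i crush.new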
This can take a long time to clean up and the monitor will not respond
until it is finished.
It seems to start another two processes/threads to do the cleanup, but the
main monitor will not respond until those threads complete.
I have had to restart one monitor every couple of days since going to
6.
Any plans to build a set of packages for Fedora 19 yet?
F19 has qemu 1.4.2 packaged and we would like to try it with ceph
cuttlefish.
Attempting to install the F18 ceph 6.1.4 packages bumps into a dependency on
libboost_system-mt.so.1.50.0()(64bit).
The version of libboost on F19 is 1.53 :(
I will have
Upgrading a cluster from 6.1.3 to 6.1.4 with 3 monitors. Cluster had
been successfully upgraded from bobtail to cuttlefish and then from
6.1.2 to 6.1.3. There have been no changes to ceph.conf.
Node mon.a upgrade: a, b, c monitors OK after upgrade
Node mon.b upgrade: a, b monitors OK after upgrade (
com/docs/next/rados/operations/add-or-rm-mons/#adding-monitors
- Mike
On 6/25/2013 10:34 PM, Darryl Bond wrote:
Upgrading a cluster from 6.1.3 to 6.1.4 with 3 monitors. Cluster had
been successfully upgraded from bobtail to cuttlefish and then from
6.1.2 to 6.1.3. There have been no changes to
&& sudo mkdir /var/lib/ceph/mon/ceph-c
- Mike
On 6/25/2013 11:08 PM, Darryl Bond wrote:
Thanks for your prompt response.
Given that my mon.c /var/lib/ceph/mon/ceph-c is currently populated,
should I delete its contents after removing the monitor and before
re-adding it?
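From my reading of the add-or-rm-mons doc, the full sequence for mon.c would
be roughly this (the temp file paths are just scratch locations I picked):

  service ceph stop mon.c
  ceph mon remove c
  rm -rf /var/lib/ceph/mon/ceph-c/*
  ceph auth get mon. -o /tmp/mon.keyring
  ceph mon getmap -o /tmp/monmap
  ceph-mon -i c --mkfs --monmap /tmp/monmap --keyring /tmp/mon.keyring
  ceph mon add c <ip of mon.c>:6789
  service ceph start mon.c

i.e. wiping the old store before the mkfs step, which is what prompted the
question.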
Darryl
On 06
pgmap v4064405: 5448 pgs: 5447 active+clean, 1 active+clean+scrubbing+deep; 5829 GB data, 11691 GB used, 34989 GB / 46681 GB avail; 328B/s rd, 816KB/s wr, 135op/s
mdsmap e1: 0/0/1 up
Looks like there is a fix on the way.
Darryl
On 06/26/13 13:58, Darryl Bond wrote:
Nope, same outcome