Hi All.
Hosts: Dell R815 x5, 128 GB RAM, 25 OSDs + 5 SSDs (journal + system).
Network: 2x 10 GbE + LACP
Kernel: 2.6.32
QEMU emulator version 1.4.2, Copyright (c) 2003-2008 Fabrice Bellard
POOLs:
root@kvm05:~# ceph osd dump | grep 'rbd'
pool 5 'rbd' rep size 2 min_size 1 crush_ruleset 2 object_hash rjenkins
Hello Bradley, in addition to your question, I'm interested in the following:
5) Can I change all the 'type' IDs when adding a new type "host-slow", to
distinguish between OSDs with the journal on the same HDD and those with a
separate SSD? E.g. from
type 0 osd
type 1 host
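For illustration only, a sketch of the round trip through crushtool (file names are placeholders):
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
# in crushmap.txt, append a new entry with an unused id to the types section, e.g.:
#   type 11 host-slow
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new
Whether renumbering the existing type ids is safe I'd leave to others to confirm; appending a new, unused id sidesteps the question.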
On 2/4/14 17:06 , Craig Lewis wrote:
Now that I've started seeing missing objects, I'm not able to download
objects that should be on the slave, assuming replication is up to date.
Either it's not up to date, or it's skipping objects every pass.
Using my --max-entries fix
(https://github.com/ce
I have a test cluster that is up and running. It consists of three mons and
three OSD servers, with each OSD server having eight OSDs and two SSDs for
journals. I'd like to move from the flat crushmap to a crushmap with typical
depth using most of the predefined types. I have the current c
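For what it's worth, the deeper hierarchy can also be built up incrementally from the CLI instead of editing the decompiled map; a sketch with made-up bucket names:
ceph osd crush add-bucket rack1 rack
ceph osd crush move node01 rack=rack1
ceph osd crush move rack1 root=default
Repeated per host and rack this gives the usual host -> rack -> root depth with the predefined types; expect some data movement as the map changes.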
Does anybody else think there is a problem with the docs/settings here...
> Message: 13
> Date: Thu, 06 Feb 2014 12:11:53 +0100 (CET)
> From: Alexandre DERUMIER
> To: Graeme Lambert
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] RBD Caching - How to enable?
> Message-ID:
> Content
Great!
Thanks for your help.
--
Regards
Dominik
2014-02-06 21:10 GMT+01:00 Sage Weil :
> On Thu, 6 Feb 2014, Dominik Mostowiec wrote:
>> Hi,
>> Thanks !!
>> Can You suggest any workaround for now?
>
> You can adjust the crush weights on the overfull nodes slightly. You'd
> need to do it by hand,
On Thu, 6 Feb 2014, Dominik Mostowiec wrote:
> Hi,
> Thanks !!
> Can You suggest any workaround for now?
You can adjust the crush weights on the overfull nodes slightly. You'd
need to do it by hand, but that will do the trick. For example,
ceph osd crush reweight osd.123 .96
(if the current
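For anyone following along, the surrounding workflow is roughly the following; the OSD id and weight are just example values:
ceph osd tree                          # note the current crush weight of the overfull OSD
ceph osd crush reweight osd.123 0.96   # nudge it down slightly
ceph -w                                # watch the resulting data movement settle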
On 06.02.2014 16:24, Mark Nelson wrote:
> Hi Christian, can you tell me a little bit about how you are using Ceph and
> what kind of IO you are doing?
Sure. We're using it almost exclusively for serving VM images that are
accessed from Qemu's built-in RBD client. The VMs themselves perform a ver
Hi John,
The 50/50 thing comes from the way the Ceph OSD writes data twice:
first to the journal, and then subsequently to the data partition.
The write doubling may not affect your performance outcome, depending
on the ratio of drive bandwidth to network bandwidth and the I/O
pattern. In configu
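As a back-of-the-envelope illustration (the numbers are assumptions, not measurements): a SATA drive sustaining ~150 MB/s can deliver only about 75 MB/s of client writes when journal and data share it, since every byte is written twice, while a separate SSD journal leaves the full ~150 MB/s for data. Against a 10 GbE link (~1250 MB/s) that works out to roughly 16 co-journaled drives, versus about 8 SSD-journaled ones, per host before the network rather than the disks becomes the limit (ignoring replication traffic).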
Hey all, I'm currently poring through the Ceph docs trying to familiarize
myself with the product before I begin my cluster build-out for a virtualized
environment. One area which I've been looking into is disk
throughput/performance.
I stumbled onto the following site:
http://www.sebastien-ha
Hi,
Thanks!!
Can you suggest any workaround for now?
--
Regards
Dominik
2014-02-06 18:39 GMT+01:00 Sage Weil :
> Hi,
>
> Just an update here. Another user saw this and after playing with it I
> identified a problem with CRUSH. There is a branch outstanding
> (wip-crush) that is pending review
Hi,
Just an update here. Another user saw this and after playing with it I
identified a problem with CRUSH. There is a branch outstanding
(wip-crush) that is pending review, but it's not a quick fix because of
compatibility issues.
sage
On Thu, 6 Feb 2014, Dominik Mostowiec wrote:
> Hi,
>
Hi,
I have to "open" our CEPH cluster for some clients, that only support
kernel rbd. In general that's no problem and works just fine (verified
in our test-cluster ;-) ). I then tried to map images from our
production cluster and failed: rbd: add failed: (95) Operation not supported
After some te
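For whatever it's worth, two things that commonly produce "(95) Operation not supported" from older kernel clients are an image created with --image-format 2, which old krbd cannot map, and CRUSH tunables newer than the kernel understands. The first is quick to rule out (pool/image names are placeholders):
rbd info <pool>/<image>    # check the "format:" line; format 1 is the old, krbd-friendly layout
The active CRUSH tunables profile is also worth comparing against what the client kernel supports.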
On 02/06/2014 04:17 AM, Christian Kauhaus wrote:
Hi,
after running Ceph for a while I see a lot of fragmented files on our OSD
filesystems (all running ext4). For example:
itchy ~ # fsck -f /srv/ceph/osd/ceph-5
fsck from util-linux 2.22.2
e2fsck 1.42 (29-Nov-2011)
[...]
/dev/mapper/vgosd00-ceph-
Hi,
Maybe this info can help to find what is wrong.
For one PG (3.1e4a) which is active+remapped:
{ "state": "active+remapped",
"epoch": 96050,
"up": [
119,
69],
"acting": [
119,
69,
7],
Logs:
On osd.7:
2014-02-04 09:45:54.966913 7fa618afe700 1 osd.7 p
Hi all,
Can anyone advise what the problem below is with rbd-fuse? From
http://mail.blameitonlove.com/lists/ceph-devel/msg14723.html it looks
like this has happened before but should've been fixed way before now?
rbd-fuse -d -p libvirt-pool -c /etc/ceph/ceph.conf ceph
FUSE library version: 2
On Thu, Feb 6, 2014 at 12:11 PM, Alexandre DERUMIER wrote:
>>>Do the VMs using RBD images need to be restarted at all?
> I think yes.
In our case, we had to restart the hypervisor qemu-kvm process to
enable caching.
Cheers, Dan
Hi,
Our three radosgws are OpenStack VMs. It seems to work for our (limited)
testing, and I don't see a reason why it shouldn't work.
Cheers, Dan
-- Dan van der Ster || Data & Storage Services || CERN IT Department --
On Thu, Feb 6, 2014 at 2:12 PM, Dominik Mostowiec
wrote:
> Hi Ceph Users,
> Wha
Hi Ceph Users,
What do you think about virtualizing the radosgw machines?
Does anybody have production-level experience with such an architecture?
--
Regards
Dominik
>>OK, so I need to change ceph.conf on the compute nodes?
yes.
>>Do the VMs using RBD images need to be restarted at all?
I think yes.
>>Anything changed in the virsh XML for the nodes?
you need to add cache=writeback for your disks.
If you use qemu > 1.2, there is no need to add "rbd cache = true".
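To make that concrete, a minimal sketch (pool/image names are placeholders, and the monitor/auth details are omitted). Client-side ceph.conf on the compute node, only needed for qemu <= 1.2:
[client]
rbd cache = true
And the disk stanza in the guest's virsh XML, which carries the writeback hint:
<disk type='network' device='disk'>
  <driver name='qemu' type='raw' cache='writeback'/>
  <source protocol='rbd' name='libvirt-pool/vm-disk-1'/>
  <target dev='vda' bus='virtio'/>
</disk>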
Hi Alexandre,
OK, so I need to change ceph.conf on the compute nodes? Do the VMs
using RBD images need to be restarted at all? Anything changed in the
virsh XML for the nodes?
Best regards
*Graeme*
On 06/02/14 10:50, Alexandre DERUMIER wrote:
The documentation states that setting "rbd c
>> The documentation states that setting "rbd cache = true" in [global] enables
>> it, but doesn't elaborate on whether you need to restart any Ceph processe
It's on the client side! (so no need to restart the Ceph daemons)
----- Original Message -----
From: "Graeme Lambert"
To: ceph-users@lists.c
Hi,
I've got a few VMs in Ceph RBD that are running very slowly - presumably
down to a backfill after increasing the pg_num of a big pool.
Would RBD caching resolve that issue? If so, how do I enable it? The
documentation states that setting "rbd cache = true" in [global] enables
it, but do
Hi,
after running Ceph for a while I see a lot of fragmented files on our OSD
filesystems (all running ext4). For example:
itchy ~ # fsck -f /srv/ceph/osd/ceph-5
fsck from util-linux 2.22.2
e2fsck 1.42 (29-Nov-2011)
[...]
/dev/mapper/vgosd00-ceph--osd00: 461903/418119680 files (33.7%
non-contiguou
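In case it's useful, e2fsprogs ships a read-only fragmentation report that gives a per-filesystem score without touching any data (the path matches the example above):
e4defrag -c /srv/ceph/osd/ceph-5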
Hi All
At the moment I have an issue with a backfill process. After a disk failure,
I brought the new disk online and have started to increase the weight of the
disk step by step. But now I have a warning and the backfill process has stopped.
ceph health detail
HEALTH_WARN 4 pgs backfill; 1 pgs