Re: [ceph-users] RBD Block performance vs rbd mount as filesystem

2016-11-07 Thread Bill WONG
Hi Alexandre, if I compile Ceph from source, then I cannot use ceph-deploy to install the cluster; everything needs to be handled by myself. As I am running CentOS 7, it looks like Ceph suggests using ceph-deploy to deploy the cluster. Is there any pre-compiled package, or is --with-jemalloc enabled by default

Re: [ceph-users] RBD Block performance vs rbd mount as filesystem

2016-11-07 Thread Bill WONG
Hi Alexandre, for QEMU with --with-jemalloc to work with Ceph, it looks like there is no pre-compiled package for CentOS 7, so it also needs a manual install. Any ideas how I can rebuild from the source RPMs? On Mon, Nov 7, 2016 at 2:49 PM, Alexandre DERUMIER wrote: > Also, if you really to get more iops fr
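The typical CentOS 7 workflow for rebuilding from source RPMs is sketched below; the package names and the spec-file edit for --with-jemalloc are assumptions and will vary per Ceph release:

```shell
# Hypothetical sketch of an SRPM rebuild with jemalloc on CentOS 7.
sudo yum install -y rpm-build yum-utils jemalloc-devel
yumdownloader --source ceph          # fetch the source RPM
rpm -ivh ceph-*.src.rpm              # unpack into ~/rpmbuild
# Edit ~/rpmbuild/SPECS/ceph.spec so configure is passed --with-jemalloc,
# then rebuild the binary packages:
rpmbuild -ba ~/rpmbuild/SPECS/ceph.spec
```

The resulting RPMs in ~/rpmbuild/RPMS can then be installed with yum/rpm in place of the stock packages.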

[ceph-users] forward cache mode support?

2016-11-07 Thread Henrik Korkuc
Hey, trying to activate forward mode for a cache pool results in "Error EPERM: 'forward' is not a well-supported cache mode and may corrupt your data. pass --yes-i-really-mean-it to force." The change introducing this message landed a few months ago, and I didn't manage to find the reason for it. Were
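For reference, forcing the mode is done exactly as the error message suggests; a minimal sketch, where the pool name is a placeholder:

```shell
# Force the poorly-supported 'forward' cache mode, acknowledging the risk
# the warning describes. 'my-cache-pool' is a hypothetical pool name.
ceph osd tier cache-mode my-cache-pool forward --yes-i-really-mean-it
```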

Re: [ceph-users] Monitors stores not trimming after upgrade from Dumpling to Hammer

2016-11-07 Thread Wido den Hollander
> On 4 November 2016 at 2:05, Joao Eduardo Luis wrote: > > > On 11/03/2016 06:18 PM, w...@42on.com wrote: > > > >> Personally, I don't like this solution one bit, but I can't see any other > >> way without a patched monitor, or maybe ceph_monstore_tool. > >> > >> If you are willing to wait ti

Re: [ceph-users] VM disk operation blocked during OSDs failures

2016-11-07 Thread fcid
Thanks Christian, I'm using a pool with size 3, min_size 1. I can see the cluster serving I/O in a degraded state after the OSD is marked down, but the problem we have is in the interval between the OSD failure event and the moment when that OSD is marked down. In that interval (which can take up
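That detection interval is governed largely by the OSD heartbeat settings; a hedged sketch of tightening them follows (the values are illustrative, and lowering them too far risks spurious down-markings on a busy network):

```shell
# An OSD is reported down only after peers miss its heartbeats for
# osd_heartbeat_grace seconds (20 by default), so I/O to PGs whose primary
# just died can block for roughly that long.
ceph tell osd.* injectargs '--osd-heartbeat-interval 3 --osd-heartbeat-grace 10'
ceph tell mon.* injectargs '--osd-heartbeat-grace 10'
```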

[ceph-users] Create ec pool for rgws

2016-11-07 Thread fridifree
Hi, what is the best way to set up a Ceph cluster with EC pools for RGWs? Thank you ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
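One common layout puts only the RGW bucket data pool on erasure coding, keeping the index and metadata pools replicated; a sketch (the profile parameters and the data-pool name are assumptions, the latter depending on the RGW zone configuration):

```shell
# Create an EC profile and an erasure-coded RGW data pool.
ceph osd erasure-code-profile set rgw-ec k=4 m=2
ceph osd pool create default.rgw.buckets.data 256 256 erasure rgw-ec
```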

Re: [ceph-users] VM disk operation blocked during OSDs failures

2016-11-07 Thread Gregory Farnum
On Mon, Nov 7, 2016 at 5:44 AM, fcid wrote: > Thanks Christian, > > I'm using a pool with size 3, min_size 1. > > I can see the cluster serving I/O in a degraded state after the OSD is marked > down, but the problem we have is in the interval between the OSD failure > event and the moment when that OSD

[ceph-users] lost OSDs during upgrade from 10.2.2 to 10.2.3

2016-11-07 Thread Simion Marius Rad
Hello, I have 6 OSDs on two hosts stuck at version 10.2.2 because of XFS corruption (the ceph-osd services froze while restarting after the upgrade, and the ceph-osd processes ended up in D state). Because I had to run xfs_repair with the -L argument, all those OSDs are crashing and I cannot update th

[ceph-users] Fwd: Hammer OSD memory increase when add new machine

2016-11-07 Thread Dong Wu
Any suggestions? Thanks. -- Forwarded message -- From: Dong Wu Date: 2016-10-27 18:50 GMT+08:00 Subject: Re: [ceph-users] Hammer OSD memory increase when add new machine To: huang jun Cc: ceph-users 2016-10-27 17:50 GMT+08:00 huang jun : > how do you add the new machine ? >

Re: [ceph-users] RBD Block performance vs rbd mount as filesystem

2016-11-07 Thread Alexandre DERUMIER
>>if I compile Ceph from source, then I cannot use ceph-deploy to install >>the cluster; everything needs to be handled by myself. As I am running CentOS 7, >>it looks like Ceph suggests >>using ceph-deploy to deploy the cluster. Is >>there any pre-compiled package, or is --with-jemalloc enabled by default

Re: [ceph-users] RBD Block performance vs rbd mount as filesystem

2016-11-07 Thread Alexandre DERUMIER
>>for QEMU with --with-jemalloc to work with Ceph, it looks like there is no >>pre-compiled package for CentOS 7, so it also needs a manual install. Any ideas how I >>can rebuild from the source RPMs? Sorry, can't help; I'm using Debian and I really don't know how to build RPMs ;) BTW, if you want to test i
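One low-effort way to try jemalloc without rebuilding any RPMs is to preload it into the daemon; a sketch, assuming the library path installed by the EPEL jemalloc package on CentOS 7:

```shell
sudo yum install -y jemalloc
# Preload jemalloc into a foreground OSD; the .so path and OSD id are
# placeholders and may differ on your system.
LD_PRELOAD=/usr/lib64/libjemalloc.so.1 ceph-osd -i 0 -f
```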

Re: [ceph-users] Bluestore + erasure coding memory usage

2016-11-07 Thread bobobo1...@gmail.com
Just bumping this and CCing directly since I foolishly broke the threading on my reply. On 4 Nov. 2016 8:40 pm, "bobobo1...@gmail.com" wrote: > > Then you can view the output data with ms_print or with > massif-visualizer. This may help narrow down where in the code we are > using the memory. >
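The massif workflow referenced in the quote looks roughly like this (running an OSD in the foreground under valgrind is very slow; the OSD id is a placeholder):

```shell
valgrind --tool=massif ceph-osd -i 0 -f   # writes massif.out.<pid> on exit
ms_print massif.out.* | less              # or load it in massif-visualizer
```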

Re: [ceph-users] Replication strategy, write throughput

2016-11-07 Thread Andreas Gerstmayr
2016-11-07 3:05 GMT+01:00 Christian Balzer : > > Hello, > > On Fri, 4 Nov 2016 17:10:31 +0100 Andreas Gerstmayr wrote: > >> Hello, >> >> I'd like to understand how replication works. >> In the paper [1] several replication strategies are described, and >> according to a (bit old) mailing list post