Hi Alexandre,
if I compile Ceph from source, then I cannot use ceph-deploy to
install the cluster; everything has to be handled by myself. I am running
CentOS 7, and it looks like Ceph suggests using ceph-deploy to deploy the
cluster. Is there any pre-compiled package, or a way to enable --with-jemalloc by default?
Hi Alexandre,
for QEMU built with --with-jemalloc to work with Ceph, it looks like there is no
pre-compiled package for CentOS 7, so it also needs to be installed manually. Any ideas
how I can rebuild it from the source RPMs?
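Roughly what I had in mind is below. This is just a sketch, and the exact way to get
--with-jemalloc into the configure step depends on the spec file of the release, so
please correct me if this is the wrong approach:

  yumdownloader --source ceph            # needs yum-utils
  sudo yum-builddep -y ceph-*.src.rpm    # pull in the build dependencies
  sudo yum install -y rpm-build jemalloc-devel
  rpm -ivh ceph-*.src.rpm
  # edit ~/rpmbuild/SPECS/ceph.spec so the configure step is passed --with-jemalloc
  # (where exactly depends on the spec of your release, so check it first)
  rpmbuild -ba ~/rpmbuild/SPECS/ceph.spec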
On Mon, Nov 7, 2016 at 2:49 PM, Alexandre DERUMIER
wrote:
> Also, if you really want to get more iops fr
Hey,
Trying to activate forward mode for a cache pool results in "Error EPERM:
'forward' is not a well-supported cache mode and may corrupt your data.
pass --yes-i-really-mean-it to force."
This message was introduced a few months ago, and I didn't
manage to find the reason for that change. Were
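For reference, the exact call is roughly the following (the pool name is just a
placeholder); the second form adds the override that the error message asks for:

  ceph osd tier cache-mode hot-pool forward
  # forcing it, as the error suggests, only after weighing the warning:
  ceph osd tier cache-mode hot-pool forward --yes-i-really-mean-it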
> On 4 November 2016 at 2:05, Joao Eduardo Luis wrote:
>
>
> On 11/03/2016 06:18 PM, w...@42on.com wrote:
> >
> >> Personally, I don't like this solution one bit, but I can't see any other
> >> way without a patched monitor, or maybe ceph_monstore_tool.
> >>
> >> If you are willing to wait ti
Thanks Christian,
I'm using a pool with size 3, min_size 1.
I can see the cluster serving I/O in a degraded state after the OSD is marked
down, but the problem we have is in the interval between the OSD failure
event and the moment when that OSD is marked down.
In that interval (which can take up
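(For context, these are the settings I understand govern that interval. The values
below are only what I believe the defaults to be, so please correct me if I have
them wrong:)

  [osd]
  osd heartbeat interval = 6      # seconds between heartbeats to peer OSDs
  osd heartbeat grace = 20        # seconds without a heartbeat before peers report the OSD
  [mon]
  mon osd min down reporters = 2  # reports needed before the monitor marks the OSD down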
Hi,
What is the best way to set up a Ceph cluster with EC pools for RGW?
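To make the question more concrete, the rough shape I had in mind is below; the
profile name, pool name, PG count and k/m values are only placeholders:

  # define an EC profile, then an erasure-coded pool for the RGW bucket data
  ceph osd erasure-code-profile set rgw-ec-profile k=4 m=2 ruleset-failure-domain=host
  ceph osd pool create default.rgw.buckets.data 128 128 erasure rgw-ec-profile
  # the index and metadata pools would stay replicated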
Thank you
On Mon, Nov 7, 2016 at 5:44 AM, fcid wrote:
> Thanks Christian,
>
> I'm using a pool with size 3, min_size 1.
>
> I can see the cluster serving I/O in a degraded state after the OSD is marked
> down, but the problem we have is in the interval between the OSD failure
> event and the moment when that OSD
Hello,
I have 6 OSDs on two hosts stuck at version 10.2.2 because of XFS
corruption (the ceph-osd services froze while restarting after the upgrade
and the ceph-osd processes ended in D state).
Because I had to run xfs_repair with the -L argument, all those OSDs are
crashing and I cannot update
th
Any suggestions?
Thanks.
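If it helps, what I can still do is start one of the affected OSDs by hand with
verbose logging to capture the crash; the OSD id and debug levels below are just
examples:

  # run one affected OSD in the foreground with high debug levels and keep the output
  ceph-osd -f -i 12 --debug_osd 20 --debug_filestore 20 2>&1 | tee osd.12.crash.log
  # and confirm which versions the other OSDs are actually running
  ceph tell osd.* version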
-- Forwarded message --
From: Dong Wu
Date: 2016-10-27 18:50 GMT+08:00
Subject: Re: [ceph-users] Hammer OSD memory increase when add new machine
To: huang jun
Cc: ceph-users
2016-10-27 17:50 GMT+08:00 huang jun :
> how do you add the new machine?
>
>>If I compile Ceph from source, then I cannot use ceph-deploy to install
>>the cluster; everything has to be handled by myself. I am running CentOS 7,
>>and it looks like Ceph suggests using ceph-deploy to deploy the cluster. Is
>>there any pre-compiled package, or a way to enable --with-jemalloc by default?
>>For QEMU built with --with-jemalloc to work with Ceph, it looks like there is no
>>pre-compiled package for CentOS 7, so it also needs to be installed manually. Any ideas
>>how I can rebuild it from the source RPMs?
Sorry, I can't help; I'm using Debian and I really don't know how to build RPMs ;)
BTW, if you want to test i
Just bumping this and CCing directly since I foolishly broke the threading
on my reply.
On 4 Nov. 2016 8:40 pm, "bobobo1...@gmail.com" wrote:
> > Then you can view the output data with ms_print or with
> massif-visualizer. This may help narrow down where in the code we are
> using the memory.
>
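For anyone following along, the run I'm attempting looks roughly like this (the
OSD id is a placeholder):

  # run one OSD under valgrind's massif heap profiler
  valgrind --tool=massif /usr/bin/ceph-osd -f -i 3
  # after stopping it, render the snapshots it wrote (massif.out.<pid>)
  ms_print massif.out.*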
2016-11-07 3:05 GMT+01:00 Christian Balzer :
>
> Hello,
>
> On Fri, 4 Nov 2016 17:10:31 +0100 Andreas Gerstmayr wrote:
>
>> Hello,
>>
>> I'd like to understand how replication works.
>> In the paper [1] several replication strategies are described, and
>> according to a (bit old) mailing list post
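(To make the question a bit more concrete: what I am essentially trying to
understand is what happens between the client and the acting set that a command
like the one below reports; the pool and object names are arbitrary:)

  # show the PG, up set, acting set and primary OSD for a single object
  ceph osd map rbd myobject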