Thanks, Ronny, for the suggestion on adapting the cluster to Ceph.
Youssef Eldakar
Bibliotheca Alexandrina
From: ceph-users [ceph-users-boun...@lists.ceph.com] on behalf of Ronny Aasen
[ronny+ceph-us...@aasen.cx]
Sent: Thursday, March 16, 2017 19:10
To: c
Hi,
I have a few questions regarding Ceph in case of a failure. My setup
consists of three monitors and two hosts, each of which hosts a couple
of OSDs. Basically it looks like this:
> root@max:~# ceph osd tree
> ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY
> -1 19.87860 root de
Hi Shain,
What I would do: take osd.32 out:
# systemctl stop ceph-osd@32
# ceph osd out osd.32
This will cause rebalancing.
To repair/reuse the drive you can do:
# smartctl -t long /dev/sdX
This will start a long self-test on the drive and - I bet - abort
after a while with somethin
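As a rough sketch of the way back (assuming the self-test eventually comes back clean and you keep the same OSD ID), you can check the result and put the disk back into service with something like:
check the self-test result:
# smartctl -l selftest /dev/sdX
if the drive looks healthy, start the daemon and mark the OSD in again
(this triggers rebalancing back onto it):
# systemctl start ceph-osd@32
# ceph osd in osd.32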
Thanks Jason! The "set" option is quite handy! That did solve the
problem and the daemons seem to be able to talk to their remote
clusters.
On Sat, Mar 18, 2017 at 7:40 PM, Jason Dillaman wrote:
> The log shows that the rbd-mirror daemon is attempting to connect to
> the cluster "ceph3" using the
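For anyone hitting the same thing later: rbd-mirror resolves a peer cluster name like "ceph3" through a local config file and keyring, so a rough sketch of the usual prerequisites (the user name "mirror" below is only a placeholder) is:

/etc/ceph/ceph3.conf                     (mon hosts of the remote cluster)
/etc/ceph/ceph3.client.mirror.keyring    (credentials the daemon connects with)

# rbd mirror pool peer add <pool> client.mirror@ceph3

The "set" option Jason pointed at can adjust such peer attributes directly instead of relying on the local files.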
Might be good if you can attach the full decompiled crushmap so we can see
exactly how things are listed/setup.
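(For reference, nothing setup-specific assumed here, the usual way to pull and decompile it is:
# ceph osd getcrushmap -o crushmap.bin
# crushtool -d crushmap.bin -o crushmap.txt
and then attach crushmap.txt.)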
-----Original Message-----
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Karol
Babioch
Sent: 19 March 2017 20:42
To: ceph-users@lists.ceph.com
Subject: [ceph
cephmailinglist writes:
> e) find /var/lib/ceph/ ! -uid 64045 -print0|xargs -0 chown ceph:ceph
> [...]
> [...] Also at that time one of our pools got a lot of extra data;
> those files were stored with root permissions since we had not
> restarted the Ceph daemons yet, and the 'find' in step e
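If any files are still owned by root, the same find from step e) can simply be re-run once the daemons are stopped; roughly (a sketch only, the exact stop/start targets depend on your init setup):
# systemctl stop ceph-osd.target
# find /var/lib/ceph/ ! -uid 64045 -print0 | xargs -0 chown ceph:ceph
# systemctl start ceph-osd.target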
Gregory Farnum writes:
> On Tue, Mar 7, 2017 at 12:43 PM, Kent Borg wrote:
>> I would love it if someone could toss out some examples of the sorts
>> of things snapshots are good for and the sorts of things they are
>> terrible for. (And some hints as to why, please.)
> They're good for CephFS s
> On 17 March 2017 at 8:39, Özhan Rüzgar Karaman wrote:
>
>
> Hi;
> Yesterday I started to upgrade my Ceph environment from 0.94.9 to 0.94.10.
> All monitor servers upgraded successfully, but I experience problems
> starting the upgraded OSD daemons.
>
> When I try to start a Ceph OSD Daemo
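Whatever the actual error turns out to be, the OSD's own log is usually the first place to look when a daemon refuses to come up; assuming the default log location (replace <id> with the OSD number):
# tail -n 100 /var/log/ceph/ceph-osd.<id>.log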
> -----Original Message-----
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Simon Leinen
> Sent: 19 March 2017 17:23
> To: Gregory Farnum
> Cc: ceph-users
> Subject: Re: [ceph-users] Snapshot Costs
>
> Gregory Farnum writes:
> > On Tue, Mar 7, 2017 at 12:43 PM, Kent
Hi,
just for the sake of completeness, here is my decompiled CRUSH map in
case it is needed for further investigation:
> # begin crush map
> tunable choose_local_tries 0
> tunable choose_local_fallback_tries 0
> tunable choose_total_tries 50
> tunable chooseleaf_descend_once 1
> tunable chooseleaf_v
Hi
I want to test the performance of Ceph with RDMA, so I built Ceph with
RDMA support and deployed it into my test environment manually.
I use fio for my performance evaluation, and it works fine if Ceph
uses *async + posix* as its ms_type.
After changing the ms_type from *async + posix* to *a
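For reference, the relevant pieces in ceph.conf for the RDMA messenger look roughly like this (the device name is only an example for a Mellanox HCA and has to match what is actually present on the node):

[global]
ms_type = async+rdma
ms_async_rdma_device_name = mlx5_0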
Hello,
You do realize that you very much have a corner case setup there, right?
Ceph works best and as expected when you have a replication of 3 and at
least 3 OSD servers, each having enough capacity (space) to handle the loss
of one node.
That being said, if you'd search the archives, a simi
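For completeness, the usual way to express that on the pools themselves (the pool name is a placeholder) is:
# ceph osd pool set <pool> size 3
# ceph osd pool set <pool> min_size 2
which of course only helps once there are at least three OSD hosts for CRUSH to place the copies on.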