> On 24 January 2017 at 22:08, Goncalo Borges wrote:
>
>
> Hi Jorge
> Indeed my advice is to configure your high-memory MDS as a standby MDS. Once
> you restart the service on the low-memory MDS, the standby one should take
> over without downtime and the first one becomes the standby.
lee_yiu_ch...@yahoo.com wrote on 18/1/2017 11:17:
Dear all,
I have a Ceph installation (dev site) with two nodes, each running a mon daemon
and an osd daemon.
(Yes, I know running a cluster of two mons is bad, but I have no choice since I
only have two nodes.)
Now, the two nodes are migrated to ano
Perhaps a deep scrub will surface a scrub error, which you can then try to fix
with ceph pg repair?
Btw., it seems that you use 2 replicas, which is not recommended except for dev
environments.
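For reference, a rough sketch of the commands involved (the pg id and pool name
below are placeholders, not taken from this thread):

    # Trigger a deep scrub on a specific placement group (pgid is hypothetical)
    ceph pg deep-scrub 1.2f
    # If the deep scrub reports an inconsistency, attempt a repair
    ceph pg repair 1.2f
    # Move a dev pool from 2 to 3 replicas ("rbd" is a placeholder pool name)
    ceph osd pool set rbd size 3
    ceph osd pool set rbd min_size 2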
On 24 January 2017 22:58:14 CET, Richard Bade wrote:
>Hi Everyone,
>I've got a strange one. After doing a reweight of
Hi Everyone,
I've got a strange one. After doing a reweight of some OSDs the other
night, our cluster is showing 1 pg stuck unclean.
2017-01-25 09:48:41 : 1 pgs stuck unclean | recovery 140/71532872
objects degraded (0.000%) | recovery 2553/71532872 objects misplaced
(0.004%)
When I query the pg i
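A hedged sketch of the usual inspection steps for a case like this (the pg id
below is a placeholder, not the actual pg from this cluster):

    # List all PGs that are stuck unclean
    ceph pg dump_stuck unclean
    # Query the full state of one of them to see which OSDs it maps to
    ceph pg 1.2f query
    # Check whether the reweight left the PG remapped/misplaced
    ceph osd tree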
Hi Jorge
Indeed my advice is to configure your high-memory MDS as a standby MDS. Once
you restart the service on the low-memory MDS, the standby one should take over
without downtime and the first one becomes the standby.
Cheers
Goncalo
From: ceph-user
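A minimal sketch of that failover, assuming systemd-managed daemons and the MDS
names "lowmem" and "highmem" (both hypothetical):

    # On the high-memory node: make sure its ceph-mds daemon is running as standby
    systemctl start ceph-mds@highmem
    ceph mds stat              # should show one up:active plus one up:standby
    # On the low-memory node: restart (or stop) the active MDS; the standby takes over
    systemctl restart ceph-mds@lowmem
    ceph mds stat              # the high-memory MDS should now be up:active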
just my own experience on this---
I have two MDS servers running (since I run CephFS). I have both MDS
servers defined in the ceph.conf file.
When I issue a "ceph -s" I see the following:
1/1/1 up {0=alpha=up:active}, 1 up:standby
I have shut one MDS server down (current active
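For illustration, a minimal ceph.conf fragment that would produce that kind of
output ("alpha" is the name from the ceph -s line above, "beta" is a
hypothetical second MDS):

    [mds.alpha]
    host = alpha

    [mds.beta]
    host = beta

With both ceph-mds daemons running, one takes rank 0 (up:active) and the other
registers as up:standby, which is what the "1/1/1 up {0=alpha=up:active},
1 up:standby" line reflects.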
I have been using a ceph-mds server that has low memory. I want to
replace it with a new system that has a lot more memory. How does one go
about replacing the ceph-mds server? I looked at the documentation,
figuring I could remove the current metadata server and add the new one,
but the remove
On 01/24/2017 09:38 AM, Florent B wrote:
Hi everyone,
I run a Ceph Jewel cluster over 3 nodes with 3 Samsung 256 GB SSDs each
(9 OSDs total).
I use it for RBD disks for my VMs.
It ran nicely for a few weeks, then suddenly the whole cluster became
extremely slow and Ceph is reporting blocked requests,
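A hedged sketch of how one might dig into those blocked requests (the osd id is
a placeholder):

    # See which OSDs the blocked/slow requests are reported against
    ceph health detail | grep -i blocked
    # On the node hosting a suspect OSD, dump its in-flight and recent slow ops
    ceph daemon osd.3 dump_ops_in_flight
    ceph daemon osd.3 dump_historic_ops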
All OSDs and Monitors are up from what I can see.
I read through the PG troubleshooting steps in the Ceph documentation and came
to the conclusion that nothing there would help me, so I didn't try anything -
except restarting / rebooting OSDs and Monitors.
How do I recover from t
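If it comes to restarting OSDs again, a small sketch of doing it without
triggering unnecessary rebalancing (the osd id is a placeholder):

    # Prevent CRUSH from marking OSDs out while they are restarted
    ceph osd set noout
    systemctl restart ceph-osd@3
    # Once everything is back up and peered, clear the flag
    ceph osd unset noout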
On Mon, Jan 23, 2017 at 11:47 PM, int32bit wrote:
> I'm a newcomer to Ceph. I deployed two Ceph clusters, one of which is
> used as a mirror cluster. When I created an image, I found that the primary
> image is stuck in 'up+stopped' status and the non-primary image's status is
> 'up+syncing'. I'm
From my observations (since there is no documentation about it), syncing
means it's continually copying new changes to the mirror... it's "ok".
Stop the mirror daemon to see what it looks like when it's not ok... it
says something like unknown, or some word like stale.
To remove the image, you have to
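A rough sketch of the status commands involved (pool and image names are
placeholders):

    # Summarise mirroring health for the whole pool
    rbd mirror pool status rbd --verbose
    # Show the per-image state ("up+stopped" on the primary, "up+syncing" on the peer)
    rbd mirror image status rbd/myimage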
linux-stable/Documentation/oops-tracing.txt:
> 8: 'D' if the kernel has died recently, i.e. there was an OOPS or BUG.
> 15: 'L' if a soft lockup has previously occurred on the system.
Your first entry already has D and L... can you try to get the first one
before D or L were flagged?
What your l
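For reference, a small bash sketch that decodes those two taint bits from the
running kernel (bit positions taken from the kernel's taint-flag table:
D = bit 7, L = bit 14):

    #!/bin/bash
    # Read the kernel taint bitmask and report the two flags discussed above
    tainted=$(cat /proc/sys/kernel/tainted)
    (( tainted & (1 << 7) ))  && echo "D: kernel has died (OOPS or BUG) since boot"
    (( tainted & (1 << 14) )) && echo "L: a soft lockup has previously occurred"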