If you have a backfillfull OSD, no PGs will be able to migrate.
Better to just add hard drives, because at least one of your OSDs is
too full.
I know you can set the backfillfull ratios with commands like these
ceph tell osd.* injectargs '--mon_osd_full_ratio=0.97'
ceph tell osd.* injectar
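For reference, on Luminous and later the full ratios can also be adjusted
cluster-wide at runtime (a sketch with example values only; raising them is
at best a temporary workaround until you add capacity):

ceph osd dump | grep ratio            # show current full/backfillfull/nearfull ratios
ceph osd set-nearfull-ratio 0.85
ceph osd set-backfillfull-ratio 0.90
ceph osd set-full-ratio 0.95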
I've heard of the same(?) problem on another cluster; they upgraded
from 12.2.7 to 12.2.10 and suddenly got problems with their CephFS
(and only with the CephFS).
However, they downgraded the MDS to 12.2.8 before I could take a look
at it, so I'm not sure what caused the issue. 12.2.8 works fine with t
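If anyone else hits this, a quick way to confirm which version each daemon
type is actually running after a partial upgrade or downgrade (Luminous and
later) is:

ceph versions        # per-daemon-type summary of running versions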
On 20/01/2019 05.50, Brian Topping wrote:
> My main constraint is that I had four disks on a single machine to start
> with, and any one of the disks should be able to fail without affecting the
> machine's ability to boot, the bad disk should be replaceable without
> requiring obscure admin skills, and the fin
Good day!
Fri, Jan 18, 2019 at 11:02:51PM +, robbat2 wrote:
> On Fri, Jan 18, 2019 at 12:21:07PM +, Max Krasilnikov wrote:
> > Dear colleagues,
> >
> > we built an L3 topology for use with Ceph, which is based on OSPF routing
> > between loopbacks, in order to get a reliable and ECMPed
On Sun, Jan 20, 2019 at 08:54:57PM +, Max Krasilnikov wrote:
> Good day!
>
> Fri, Jan 18, 2019 at 11:02:51PM +, robbat2 wrote:
>
> > On Fri, Jan 18, 2019 at 12:21:07PM +, Max Krasilnikov wrote:
> > > Dear colleagues,
> > >
> > > we built an L3 topology for use with Ceph, which is
Hello!
Sun, Jan 20, 2019 at 09:00:22PM +, robbat2 wrote:
> > > > we built an L3 topology for use with Ceph, which is based on OSPF routing
> > > > between loopbacks, in order to get a reliable and ECMPed topology, like
> > > > this:
> > > ...
> > > > CEPH configured in the way
> > > You have a
On Sun, Jan 20, 2019 at 09:05:10PM +, Max Krasilnikov wrote:
> > Just checking, since it isn't mentioned here: Did you explicitly add
> > public_network+cluster_network as empty variables?
> >
> > Trace the code in the source file I mentioned, specific to your Ceph
> > version, as it has change
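For anyone following along, a minimal sketch of what I understand that
suggestion to mean (the addresses are placeholders; this is only my reading
of the thread, not a verified config):

cat >> /etc/ceph/ceph.conf <<'EOF'
[global]
# explicitly empty, so the daemons do not try to match a local interface
# against a subnet and instead use the address they are told to bind to
public_network =
cluster_network =
EOF

With an L3/loopback setup, the per-host public_addr/cluster_addr would then
point at the loopback address that OSPF advertises.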
I have a process stuck in D+ state writing to a CephFS kernel mount. Can
anything be done about this (without rebooting)?
CentOS Linux release 7.5.1804 (Core)
Linux 3.10.0-514.21.2.el7.x86_64
Hi,
To be more precise, the netstat table looks like the following snippet:
tcp        0      0 10.10.200.5:6815   10.10.25.4:43788   ESTABLISHED  51981/ceph-osd
tcp        0      0 10.10.15.2:41020   10.10.200.8:6813   ESTABLISHED  51981/ceph-osd
tcp        0      0 10.10.15.2:48724   10.10.20
Check /proc/<pid>/stack to find where it is stuck.
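For example (a sketch; 12345 is a placeholder PID, and reading the stack
usually needs root):

ps -eo pid,stat,wchan:32,comm | awk '$2 ~ /^D/'   # processes in uninterruptible (D) sleep
cat /proc/12345/stack                             # kernel stack of the stuck process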
On Mon, Jan 21, 2019 at 5:51 AM Marc Roos wrote:
>
>
> I have a process stuck in D+ state writing to a CephFS kernel mount. Can
> anything be done about this (without rebooting)?
>
>
> CentOS Linux release 7.5.1804 (Core)
> Linux 3.10.0-514.21.2.el7.x86_64
>
>
It's http://tracker.ceph.com/issues/37977. Thanks for your help.
Regards
Yan, Zheng
On Sun, Jan 20, 2019 at 12:40 AM Adam Tygart wrote:
>
> It worked for about a week, and then seems to have locked up again.
>
> Here is the back trace from the threads on the mds:
> http://people.cs.ksu.edu/~moze
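For reference, thread backtraces like that can be gathered with something
along these lines (a sketch; it assumes a single ceph-mds on the host, needs
gdb plus the ceph debuginfo packages, and attaching pauses the MDS briefly):

gdb --batch -p "$(pidof ceph-mds)" -ex 'thread apply all bt' > mds-backtrace.txt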
Dear Ceph Users,
We have set up a CephFS cluster with 6 OSD machines, each with 16 8TB
hard disks. The Ceph version is Luminous 12.2.5. We created one data pool with
these hard disks and created another metadata pool with 3 SSDs. We created
an MDS with a 65GB cache size.
But our users keep complaining
On Mon, Jan 21, 2019 at 11:16 AM Albert Yue wrote:
>
> Dear Ceph Users,
>
> We have set up a CephFS cluster with 6 OSD machines, each with 16 8TB
> hard disks. The Ceph version is Luminous 12.2.5. We created one data pool with
> these hard disks and created another metadata pool with 3 SSDs. We create
Hi Yan Zheng,
1. The MDS cache limit is set to 64GB.
2. We got the size of the metadata pool by running `ceph df` and saw that the
metadata pool uses just 200MB of space (checked as sketched below).
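(Roughly the following; mds.<name> is a placeholder for the daemon id, and
`ceph daemon` has to be run on the host where that MDS lives:)

ceph daemon mds.<name> config get mds_cache_memory_limit   # confirm the applied cache limit
ceph df detail                                             # per-pool usage, including the metadata pool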
Thanks,
On Mon, Jan 21, 2019 at 11:35 AM Yan, Zheng wrote:
> On Mon, Jan 21, 2019 at 11:16 AM Albert Yue
> wrote:
> >
> > Dear Ceph Users,
>
Hi all, it looks like I might have pooched something. Between the two nodes I
have, I moved all the PGs to one machine, reformatted the other machine,
rebuilt that machine, and moved the PGs back. In both cases, I did this by
marking the OSDs on the machine being moved “out” and waiting for hea
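For context, that step was roughly the following in command form (a sketch;
the OSD ids are placeholders for the ones on the machine being moved):

ceph osd out 0 1 2 3          # stop mapping data to these OSDs
ceph -s                       # wait until recovery finishes and PGs are active+clean
ceph osd safe-to-destroy 0    # Luminous+: confirm the OSD's data exists elsewhere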
On Mon, 7 Jan 2019 at 21:04, Patrick Donnelly wrote:
> Hello Mahmoud,
>
> On Fri, Dec 21, 2018 at 7:44 AM Mahmoud Ismail
> wrote:
> > I'm doing benchmarks for metadata operations on CephFS, HDFS, and HopsFS
> on Google Cloud. In my current setup, I'm using 32 vCPU machines with 29 GB
> memory, a