On Mon, 3 Feb 2020 at 08:25, Wido den Hollander wrote:
> > The crash happens when the OSD wants to read from the pipe while
> > processing heartbeats. To me it sounds like a networking issue.
>
> It could also be that this OSD is so busy internally with other stuff
> that it doesn't respond to heartbe
Dear Konstantin and Patrick,
thanks!
I started migrating a 2-pool layout ceph fs (rep meta, EC default data) to a
3-pool layout (rep meta, rep default data, EC data set at "/") and am using
sub-directory mounts for the data migration. So far, everything works as it should.
Maybe some background info for ever
errata: con-fs2-meta2 is the default data pool.
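For anyone following along: the replicated pool stays the default data pool,
and the EC pool is attached via a file layout on the root directory. Roughly
like this (fs name, pool name and mount point below are placeholders, not the
ones from my cluster):

  # attach the EC pool to the fs and point the root's layout at it
  ceph fs add_data_pool cephfs cephfs_data_ec
  setfattr -n ceph.dir.layout.pool -v cephfs_data_ec /mnt/cephfs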
=
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
From: Frank Schilder
Sent: 03 February 2020 10:08
To: Patrick Donnelly; Konstantin Shalygin
Cc: ceph-users
Subject: Re: [ceph-users] ceph
On Fri, Jan 31, 2020 at 6:32 PM Ilya Dryomov wrote:
>
> On Fri, Jan 31, 2020 at 4:57 PM Dan van der Ster wrote:
> >
> > Hi Ilya,
> >
> > On Fri, Jan 31, 2020 at 11:33 AM Ilya Dryomov wrote:
> > >
> > > On Fri, Jan 31, 2020 at 11:06 AM Dan van der Ster
> > > wrote:
> > > >
> > > > Hi all,
> > >
On Sun, Feb 2, 2020 at 9:35 PM Håkan T Johansson wrote:
>
>
> Changing cp (or whatever standard tool is used) to call fsync() before
> each close() is not an option for a user. Also, doing that would lead to
> terrible performance in general. Just tested: a recursive copy of a 70k-file
> linux
On Mon, Feb 3, 2020 at 1:09 AM Frank Schilder wrote:
> Fortunately, I had an opportunity to migrate the ceph fs. For anyone
> starting fresh, I would recommend having the 3-pool layout right from the
> beginning. Never use an EC pool as the default data pool. I would even make
> this statement
Thumbs up for that!
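For anyone setting this up from scratch, a minimal sketch of such a 3-pool
layout could look like this (pool/fs names and PG counts are placeholders,
adjust to your cluster):

  ceph osd pool create cephfs_meta 64
  ceph osd pool create cephfs_data 64                        # small replicated default data pool
  ceph osd pool create cephfs_data_ec 128 128 erasure
  ceph osd pool set cephfs_data_ec allow_ec_overwrites true
  ceph fs new cephfs cephfs_meta cephfs_data                 # replicated pool becomes the default
  ceph fs add_data_pool cephfs cephfs_data_ec                # EC pool is used via directory layouts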
Best regards,
=
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
From: Patrick Donnelly
Sent: 03 February 2020 11:18
To: Frank Schilder
Cc: Konstantin Shalygin; ceph-users
Subject: Re: [ceph-users] ceph fs dir-layou
On Mon, Feb 3, 2020 at 10:38 AM Dan van der Ster wrote:
>
> On Fri, Jan 31, 2020 at 6:32 PM Ilya Dryomov wrote:
> >
> > On Fri, Jan 31, 2020 at 4:57 PM Dan van der Ster
> > wrote:
> > >
> > > Hi Ilya,
> > >
> > > On Fri, Jan 31, 2020 at 11:33 AM Ilya Dryomov wrote:
> > > >
> > > > On Fri, Jan
On Mon, Feb 3, 2020 at 11:50 AM Ilya Dryomov wrote:
>
> On Mon, Feb 3, 2020 at 10:38 AM Dan van der Ster wrote:
> >
> > On Fri, Jan 31, 2020 at 6:32 PM Ilya Dryomov wrote:
> > >
> > > On Fri, Jan 31, 2020 at 4:57 PM Dan van der Ster
> > > wrote:
> > > >
> > > > Hi Ilya,
> > > >
> > > > On Fri,
We have 18 SATA disks (2 TB each) on a physical server, each disk with an
OSD deployed on it.
I am not sure how much CPU and memory should be provisioned for this
server.
Does each OSD require a physical CPU, and how do I calculate memory usage?
Thanks.
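For context, my rough back-of-the-envelope estimate so far; please correct me
if the defaults are different on current releases:

  # BlueStore OSDs target about 4 GiB of RAM each by default:
  ceph config get osd osd_memory_target     # 4294967296 (4 GiB) out of the box
  # so 18 OSDs x 4 GiB = 72 GiB, plus OS/page cache headroom (~96 GiB total?)
  # for CPU, I have seen roughly one core (or 1-2 GHz) per HDD OSD quoted as a rule of thumb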
We're happy to announce the 13th bug fix release of the Luminous v12.2.x
long term stable release series. We recommend that all users upgrade to
this release. Many thanks to all the contributors, in particular Yuri &
Nathan, for getting this release out the door. This shall be the last
release of th
This is the seventh update to the Ceph Nautilus release series. This is
a hotfix release primarily fixing a couple of security issues. We
recommend that all users upgrade to this release.
Notable Changes
---------------
* CVE-2020-1699: Fixed a path traversal flaw in Ceph dashboard that
could a
Hi all,
I have a small cluster and yesterday I tried to mount an older RBD snapshot to
recover data. (I have approx. 230 daily snapshots of one RBD image on my small
ceph.)
After I did the mount and an ls operation, the cluster was stuck and I noticed
that 2 of my OSDs were eating CPU and rising in memory usage (more
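(What I ran was essentially the following; image and snapshot names here are
just placeholders:)

  rbd map rbd/myimage@snap-20200101 --read-only   # snapshots are mapped read-only
  mount -o ro /dev/rbd0 /mnt/restore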
Dear All,
Due to a mistake in my "rolling restart" script, one of our ceph
clusters now has a number of unfound objects:
There is an 8+2 erasure-coded data pool and a 3x replicated metadata pool;
all data is stored in cephfs.
[root@ceph7 ceph-archive]# ceph health
HEALTH_ERR 24/420880027 objects unf
Hi all,
I really hope this isn't seen as spam. I am looking for a position
where I can focus on Linux storage/Ceph. If anyone is currently
hiring, please let me know. LinkedIn profile: frankritchie.
Thanks,
Frank
Hello,
I am getting this message on my new Nautilus ceph cluster. I have a cephfs with
a ~100 TB copy in progress.
> /var/log/ceph/artemis.log:2020-02-03 16:22:49.970437 osd.66 (osd.66) 1137 :
> cluster [WRN] Large omap object found. Object:
> 8:579bf162:::mds3_openfiles.0:head PG: 8.468fd9ea (
The warning threshold recently changed; I'd just increase it in this
particular case. It just means you have lots of open files.
I think there's some work going on to split the openfiles object into
multiple objects, so that problem will be fixed.
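Something like this should do it; the exact value is up to you (the default was
recently lowered from 2,000,000 to 200,000 keys, which is why the warning
started appearing):

  ceph config set osd osd_deep_scrub_large_omap_object_key_threshold 2000000
  # the warning clears once the PG holding the object gets deep-scrubbed again, e.g.
  ceph pg deep-scrub 8.468fd9ea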
Paul
--
Paul Emmerich
Looking for help with your Ceph
This might be related to recent problems with OSDs not being queried
properly for unfound objects in some cases (which I think was fixed in
master?).
Anyway: run "ceph pg <pgid> query" on the affected PGs, check for "might
have unfound", and try restarting the OSDs mentioned there. Probably
also sufficient
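Something along these lines (PG and OSD ids below are placeholders, take them
from your own health output):

  ceph health detail | grep unfound                    # lists the affected PGs
  ceph pg 6.3f query | grep -A 10 might_have_unfound   # shows which OSDs were (not) queried
  systemctl restart ceph-osd@42                        # restart an OSD listed there, on its host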
Does anyone have access to libibverbs-debuginfo-22.1-3.el7.x86_64 and
librdmacm-debuginfo-22.1-3.el7.x86_64? I cannot find them in any repo list out
there, and gdbpmp.py requires them.
Thanks,
Joe
So now that 12.2.13 has been released, I will have a mixed environment if I
use the 12.2.12 from the Ubuntu 18.04 repo.
I also found there is a docker container, https://hub.docker.com/r/ceph/daemon;
I could potentially just use the container to run the version I need. Wondering
if anyone has done this in
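Something like this should pin the release (the tag name is my guess, check the
registry for the exact tags):

  docker pull ceph/daemon:latest-luminous
  docker run --rm --entrypoint ceph ceph/daemon:latest-luminous --version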
Hi,
We have a production cluster of 27 OSDs across 5 servers (all SSDs
running bluestore), and have started to notice a possible performance issue.
In order to isolate the problem, we built a single server with a single
OSD, and ran a few FIO tests. The results are puzzling, not that we were
ex
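For reference, the tests were along these lines (parameters here are
illustrative rather than our exact job file):

  fio --name=4k-randwrite --filename=/mnt/test/fio.dat --size=4G \
      --bs=4k --rw=randwrite --ioengine=libaio --direct=1 --iodepth=32 \
      --runtime=60 --time_based --group_reporting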
Hello Frank,
we are always looking for Ceph/Linux consultants.
--
Martin Verges
Managing director
Hint: Secure one of the last slots in the upcoming 4-day Ceph Intensive
Training at https://croit.io/training/4-days-ceph-in-depth-training.
Mobile: +49 174 9335695
E-Mail: martin.ver...@croit.io
C