Hi Loic,
On Wed, Mar 2, 2016 at 12:32 PM, Loic Dachary wrote:
>
>
> On 02/03/2016 17:15, Odintsov Vladislav wrote:
>> Hi,
>>
>> it looks very strange that an LTS release suddenly stopped supporting one of
>> the OSes in the middle of its lifecycle, especially when there are no
>> technical problems.
>>
Hi,
As a workaround you can add "ceph-disk activate-all" to rc.local.
(We use this all the time anyway just in case...)
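For reference, on an EL7 node that looks roughly like this (a sketch,
assuming rc.local lives in /etc/rc.d/ and ceph-disk is in root's PATH):
# echo 'ceph-disk activate-all' >> /etc/rc.d/rc.local
# chmod +x /etc/rc.d/rc.local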
-- Dan
On Mon, Mar 7, 2016 at 9:38 AM, Martin Palma wrote:
> Hi All,
>
> we are in the middle of patching our OSD servers and noticed that
> after rebooting no OSD disk is mounted. So we are not the only
> one facing this issue :-)
>
> Best,
> Martin
>
> On Mon, Mar 7, 2016 at 10:04 AM, Dan van der Ster wrote:
>> Hi,
>>
>> As a workaround you can add "ceph-disk activate-all" to rc.local.
>> (We use this all the time anyway just in case...)
Hi Blair!
Last I heard you should budget for 2-3 fds per OSD. This only affects
Glance in our cloud -- the hypervisors run unlimited as root.
Here's our config in /etc/security/limits.d/91-nproc.conf:
glance soft nofile 32768
glance hard nofile 32768
glance soft nproc
Hi,
Is there a tracker for this? We just hit the same problem on 10.0.5.
Cheers, Dan
# rpm -q ceph
ceph-10.0.5-0.el7.x86_64
# cat /etc/redhat-release
CentOS Linux release 7.2.1511 (Core)
# ceph-disk -v prepare /dev/sdc
DEBUG:ceph-disk:get_dm_uuid /dev/sdc uuid path is /sys/dev/block/8:32/dm/uu
rchestra.run.mira041.stderr:[mira041][WARNING] get_dm_uuid:
> get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
> 2016-03-15T18:49:59.205
> INFO:teuthology.orchestra.run.mira041.stderr:[mira041][WARNING]
> ptype_tobe_for_name: name = data
> 2016-03-15T18:49:59.206
> INFO:teuthology.orche
Hi Sean,
Did you check that the process isn't hitting some ulimits? cat
/proc/`pidof radosgw`/limits and compare with the num processes/num
FDs in use.
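Something like this shows the two side by side (a sketch, assuming a
single radosgw process on the host):
# grep -i -e 'open files' -e 'processes' /proc/$(pidof radosgw)/limits
# ls /proc/$(pidof radosgw)/fd | wc -l      # open FDs right now
# ls /proc/$(pidof radosgw)/task | wc -l    # threads right now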
Cheers, Dan
On Tue, Mar 29, 2016 at 8:35 PM, seapasu...@uchicago.edu
wrote:
> So an update for anyone else having this issue. It looks like ra
failing.
Instead, the correct solution is to stop the OSD, let Ceph backfill,
then deep-scrub the affected PG.
So I'm curious, why doesn't the OSD exit FAILED when a read fails
during deep scrub (or any time a read fails)? Failed writes certainly
cause the OSD to exit -- why not reads?
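For reference, the sequence we use is roughly this (osd and pg ids are
placeholders, and the unit name assumes a systemd deployment):
# systemctl stop ceph-osd@123       # stop the OSD with the failing disk
# ceph osd out 123                  # let the cluster backfill its PGs
# ... wait for backfill to finish / HEALTH_OK ...
# ceph pg deep-scrub 1.2f           # re-scrub the inconsistent PG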
On Thu, Apr 21, 2016 at 1:23 PM, Dan van der Ster wrote:
> Hi cephalapods,
>
> In our couple years of operating a large Ceph cluster, every single
> inconsistency I can recall was caused by a failed read during
> deep-scrub. In other words, deep scrub reads an object, the read fai
On Fri, Apr 22, 2016 at 8:07 AM, Christian Balzer wrote:
> On Fri, 22 Apr 2016 06:20:17 +0200 Martin Wilderoth wrote:
>
>> I have a ceph cluster and I will change my journal devices to new SSD's.
>>
>> In some instructions of doing this they refer to a journal file (link to
>> UUID of journal )
>>
We've done ZFS on RBD in a VM, exported via NFS, for a couple years.
It's very stable and if your use-case permits you can set zfs
sync=disabled to get very fast write performance that's tough to beat.
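On the ZFS side that's a one-liner (pool name here is hypothetical, and
you're trading the last few seconds of sync writes for the speed):
# zfs set sync=disabled tank
# zfs get sync tank                 # verify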
But if you're building something new today and have *only* the NAS
use-case then it would make b
works as expected with scheduler set to noop as it is
> optimized to consume whole, non-shared devices.
>
> Just my 2 cents ;-)
>
> Kevin
>
>
> Am Mo., 12. Nov. 2018 um 15:08 Uhr schrieb Dan van der Ster
> :
>>
>> We've done ZFS on RBD in a VM, ex
Hi ceph-users,
Most of our servers have 24 hdds plus 4 ssds.
Any experience with how these should be configured to get the best rgw performance?
We have two options:
1) All osds the same, with data on the hdd and block.db on a 40GB
ssd partition
2) Two osd device types: hdd-only for the rgw data
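For option 1 the provisioning itself is simple with ceph-volume -- a
sketch with hypothetical device names (lvm batch puts block.db on the
ssds when you mix hdds and ssds):
# ceph-volume lvm batch /dev/sd[a-x] /dev/sdy /dev/sdz /dev/sdaa /dev/sdab --yes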
I've noticed the same and have a script to help find these:
https://github.com/cernceph/ceph-scripts/blob/master/tools/clean-upmaps.py
-- dan
On Tue, Nov 20, 2018 at 5:26 PM Rene Diepstraten wrote:
>
> Hi.
>
> Today I've been looking at upmap and the balancer in upmap mode.
> The balancer has r
> Is this expected behaviour? I would expect Ceph to always try to fix
> degraded things first and foremost. Even "pg force-recover" and "pg
> force-backfill" could not force recovery.
>
> Gr. Stefan
>
>
>
>
> --
> | BIT BV http://www.bit.nl/Kamer
It's not that simple see http://tracker.ceph.com/issues/21672
For the 12.2.8 to 12.2.10 upgrade it seems the selinux module was
updated -- so the rpms restart the ceph.target.
What's worse is that this seems to happen before all the new updated
files are in place.
Our 12.2.8 to 12.2.10 upgrad
On Mon, Dec 3, 2018 at 5:00 PM Jan Kasprzak wrote:
>
> Dan van der Ster wrote:
> : It's not that simple see http://tracker.ceph.com/issues/21672
> :
> : For the 12.2.8 to 12.2.10 upgrade it seems the selinux module was
> : updated -- so the rpms restart the ceph.targ
Hey Abhishek,
We just noticed that the debuginfo is missing for 12.2.10:
http://download.ceph.com/rpm-luminous/el7/x86_64/ceph-debuginfo-12.2.10-0.el7.x86_64.rpm
Did something break in the publishing?
Cheers, Dan
On Tue, Nov 27, 2018 at 3:50 PM Abhishek Lekshmanan wrote:
>
>
> We're happy to a
Luminous has:
osd_scrub_begin_week_day
osd_scrub_end_week_day
Maybe these aren't documented. I usually check here for available options:
https://github.com/ceph/ceph/blob/luminous/src/common/options.cc#L2533
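A sketch of how to set them (day values are examples; per the
description in options.cc, 0 or 7 means Sunday):
[osd]
osd scrub begin week day = 1    # Monday
osd scrub end week day = 5      # Friday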
-- Dan
On Fri, Dec 14, 2018 at 12:25 PM Caspar Smit wrote:
>
> Hi all,
>
> We have op
Hi all,
Bringing up this old thread with a couple questions:
1. Did anyone ever follow up on the 2nd part of this thread? -- is
there any way to cache keystone EC2 credentials?
2. A question for Valery: could you please explain exactly how you
added the EC2 credentials to the local backend (your
Hi Fulvio!
Are you able to query that pg -- which osd is it waiting for?
Also, since you're prepared for data loss anyway, you might have
success setting osd_find_best_info_ignore_history_les=true on the
relevant osds (set it in the conf, then restart those osds).
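That looks roughly like this (osd id is hypothetical; remember to
remove the option again once the PG recovers):
[osd.123]
osd find best info ignore history les = true
# systemctl restart ceph-osd@123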
-- dan
On Tue, Dec 18, 2018 at 11
Hi Joao,
Has that broken the Slack connection? I can't tell if it's broken or
just quiet... last message on #ceph-devel was today at 1:13am.
-- Dan
On Tue, Dec 18, 2018 at 12:11 PM Joao Eduardo Luis wrote:
>
> All,
>
>
> Earlier this week our IRC channels were set to require users to be
> regis
ames !!!
>
>Ciao ciao
>
> Fulvio
>
> Original Message
> Subject: Re: [ceph-users] Luminous (12.2.8 on CentOS), recover or
> recreate incomplete PG
> From: Dan van der Ster
> To: fulvio.galea...@garr.it
> CC: ceph-users
>
Hey Andras,
Three mons is possibly too few for such a large cluster. We've had lots of
good stable experience with 5-mon clusters. I've never tried 7, so I can't
say if that would lead to other problems (e.g. leader/peon sync
scalability).
That said, our 1-osd bigbang tests managed with only
On Tue, Jan 8, 2019 at 12:48 PM Thomas Byrne - UKRI STFC
wrote:
>
> For what it's worth, I think the behaviour Pardhiv and Bryan are describing
> is not quite normal, and sounds similar to something we see on our large
> luminous cluster with elderly (created as jewel?) monitors. After large
>
Hi Bryan,
I think this is the old hammer thread you refer to:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-September/013060.html
We also have osdmaps accumulating on v12.2.8 -- ~12000 per osd at the moment.
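You can see the range each osd is holding via the admin socket, e.g.
(osd id hypothetical):
# ceph daemon osd.0 status | grep -E 'oldest_map|newest_map'
newest_map minus oldest_map is roughly the number of osdmaps kept.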
I'm trying to churn the osdmaps like before, but our maps are not being trimm
Hi Caspar,
On Thu, Jan 10, 2019 at 1:31 PM Caspar Smit wrote:
>
> Hi all,
>
> I wanted to test Dan's upmap-remapped script for adding new osd's to a
> cluster. (Then letting the balancer gradually move pgs to the new OSD
> afterwards)
Cool. Insert "no guarantees or warranties" comment here.
An
w, London on February 6th with details here:
https://www.meetup.com/Object-Storage-Craft-Beer-London/events/257960715/
<https://www.meetup.com/Object-Storage-Craft-Beer-London/events/257960715/>
We hope to see some of you there!
Jason
Jason Van der Schyff
VP, Opera
On Mon, Jan 14, 2019 at 3:06 PM Massimo Sgaravatto
wrote:
>
> I have a ceph luminous cluster running on CentOS7 nodes.
> This cluster has 50 OSDs, all with the same size and all with the same weight.
>
> Since I noticed that there was a quite "unfair" usage of OSD nodes (some used
> at 30 %, some
.0
> 8 hdd 5.45609 osd.8 up 1.0 1.0
> 9 hdd 5.45609 osd.9 up 1.0 1.0
> [root@ceph-mon-01 ~]#
>
> On Mon, Jan 14, 2019 at 3:13 PM Dan van der Ster wrote:
>>
>> On Mon, Jan 14, 2019 at 3:06
not between racks
> (since the very few resources) ?
> Thanks, Massimo
>
> On Mon, Jan 14, 2019 at 3:29 PM Dan van der Ster wrote:
>>
>> On Mon, Jan 14, 2019 at 3:18 PM Massimo Sgaravatto
>> wrote:
>> >
>> > Thanks for the prompt reply
>> >
>
Hi Wido,
`rpm -q --scripts ceph-selinux` will tell you why.
It was the same from 12.2.8 to 12.2.10: http://tracker.ceph.com/issues/21672
And the problem is worse than you described, because the daemons are
even restarted before all the package files have been updated.
Our procedure on these upg
On Wed, Sep 19, 2018 at 7:01 PM Bryan Stillwell wrote:
>
> > On 08/30/2018 11:00 AM, Joao Eduardo Luis wrote:
> > > On 08/30/2018 09:28 AM, Dan van der Ster wrote:
> > > Hi,
> > > Is anyone else seeing rocksdb mon stores slowly growing to >15GB,
> > >
On Wed, Jan 16, 2019 at 11:17 PM Patrick Donnelly wrote:
>
> On Wed, Jan 16, 2019 at 1:21 AM Marvin Zhang wrote:
> > Hi CephFS experts,
> > From document, I know multi-fs within a cluster is still experiment feature.
> > 1. Is there any estimation about stability and performance for this feature?
Hi Zheng,
We also just saw this today and got a bit worried.
Should we change to:
diff --git a/src/mds/CInode.cc b/src/mds/CInode.cc
index e8c1bc8bc1..e2539390fb 100644
--- a/src/mds/CInode.cc
+++ b/src/mds/CInode.cc
@@ -2040,7 +2040,7 @@ void CInode::finish_scatter_gather_update(int type)
On Tue, Jan 22, 2019 at 3:33 PM Yan, Zheng wrote:
>
> On Tue, Jan 22, 2019 at 9:08 PM Dan van der Ster wrote:
> >
> > Hi Zheng,
> >
> > We also just saw this today and got a bit worried.
> > Should we change to:
> >
>
> What is the error message
No idea, but maybe this commit which landed in v12.2.11 is relevant:
commit 187bc76957dcd8a46a839707dea3c26b3285bd8f
Author: runsisi
Date: Mon Nov 12 20:01:32 2018 +0800
librbd: fix missing unblock_writes if shrink is not allowed
Fixes: http://tracker.ceph.com/issues/36778
Signed
Hi,
With HEALTH_OK a mon data dir should be under 2GB for even such a large cluster.
During backfilling scenarios, the mons keep old maps and grow quite
quickly. So if you have balancing, pg splitting, etc. ongoing for
awhile, the mon stores will eventually trigger that 15GB alarm.
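A quick way to keep an eye on it (assuming default paths):
# du -sh /var/lib/ceph/mon/*/store.db    # compare with mon_data_size_warn (15GB default)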
But the intend
Note that there are some improved upmap balancer heuristics in
development here: https://github.com/ceph/ceph/pull/26187
-- dan
On Tue, Feb 5, 2019 at 10:18 PM Kári Bertilsson wrote:
>
> Hello
>
> I previously enabled upmap and used automatic balancing with "ceph balancer
> on". I got very good
showing > 15G, do I need to run the compact commands
> to do the trimming?
Compaction isn't necessary -- you should only need to restart all the
peons, then the leader. A few minutes later the DBs should start
trimming.
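A sketch of that sequence (mon ids are hypothetical):
# ceph quorum_status -f json-pretty | grep quorum_leader_name
# systemctl restart ceph-mon@mon1    # peon
# systemctl restart ceph-mon@mon2    # peon
# systemctl restart ceph-mon@mon0    # leader, last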
-- dan
>
> Thanks
> Swami
>
> On Wed, Feb
may not be safe to restart the
> ceph-mon, instead prefer to do the compact on non-leader mons.
> Is this ok?
>
Compaction doesn't solve this particular problem, because the maps
have not yet been deleted by the ceph-mon process.
-- dan
> Thanks
> Swami
>
> On Thu, Feb 7,
On Fri, Feb 1, 2019 at 10:18 PM Neha Ojha wrote:
>
> On Fri, Feb 1, 2019 at 1:09 PM Robert Sander
> wrote:
> >
> > Am 01.02.19 um 19:06 schrieb Neha Ojha:
> >
> > > If you would have hit the bug, you should have seen failures like
> > > https://tracker.ceph.com/issues/36686.
> > > Yes, pglog_hard
On Thu, Feb 14, 2019 at 11:13 AM Wido den Hollander wrote:
>
> On 2/14/19 10:20 AM, Dan van der Ster wrote:
> > On Thu., Feb. 14, 2019, 6:17 a.m. Wido den Hollander >>
> >> Hi,
> >>
> >> On a cluster running RGW only I'm running into Blu
On Thu, Feb 14, 2019 at 12:07 PM Wido den Hollander wrote:
>
>
>
> On 2/14/19 11:26 AM, Dan van der Ster wrote:
> > On Thu, Feb 14, 2019 at 11:13 AM Wido den Hollander wrote:
> >>
> >> On 2/14/19 10:20 AM, Dan van der Ster wrote:
> >>> On T
On Fri, Feb 15, 2019 at 11:40 AM Willem Jan Withagen wrote:
>
> On 15/02/2019 10:39, Ilya Dryomov wrote:
> > On Fri, Feb 15, 2019 at 12:05 AM Mike Perez wrote:
> >>
> >> Hi Marc,
> >>
> >> You can see previous designs on the Ceph store:
> >>
> >> https://www.proforma.com/sdscommunitystore
> >
> >
On Fri, Feb 15, 2019 at 12:01 PM Willem Jan Withagen wrote:
>
> On 15/02/2019 11:56, Dan van der Ster wrote:
> > On Fri, Feb 15, 2019 at 11:40 AM Willem Jan Withagen
> > wrote:
> >>
> >> On 15/02/2019 10:39, Ilya Dryomov wrote:
> >>> On
On Thu, Feb 14, 2019 at 2:31 PM Sage Weil wrote:
>
> On Thu, 7 Feb 2019, Dan van der Ster wrote:
> > On Thu, Feb 7, 2019 at 12:17 PM M Ranga Swami Reddy
> > wrote:
> > >
> > > Hi Dan,
> > > >During backfilling scenarios, the mons keep old ma
osd bench, etc)?
>
> On Fri, Feb 15, 2019 at 3:13 PM M Ranga Swami Reddy
> wrote:
> >
> > today I again hit the warn with 30G also...
> >
> > On Thu, Feb 14, 2019 at 7:39 PM Sage Weil wrote:
> > >
> > > On Thu, 7 Feb 2019, Dan van der Ster wrote:
Hi,
pg-upmap-items became more strict in v12.2.11 when validating upmaps.
E.g., it now won't let you put two PGs in the same rack if the crush
rule doesn't allow it.
Where are OSDs 23 and 123 in your cluster? What is the relevant crush rule?
-- dan
On Wed, Feb 27, 2019 at 9:17 PM Kári Bertilss
e in the same host. So this change should be perfectly
> acceptable by the rule set.
> Something must be blocking the change, but i can't find anything about it in
> any logs.
>
> - Kári
>
> On Thu, Feb 28, 2019 at 8:07 AM Dan van der Ster wrote:
>>
>> Hi,
>>
Hi all,
We have an S3 cluster with >10 million objects in default.rgw.meta.
# radosgw-admin zone get | jq .metadata_heap
"default.rgw.meta"
From these old tickets I realized that this setting is obsolete and
that those objects are probably useless:
http://tracker.ceph.com/issues/17256
http://tra
one set --rgw-zone=default --infile=zone.json
and now I can safely remove the default.rgw.meta pool.
-- Dan
On Tue, Mar 12, 2019 at 3:17 PM Dan van der Ster wrote:
>
> Hi all,
>
> We have an S3 cluster with >10 million objects in default.rgw.meta.
>
> # radosgw-
ule is confusing the new
>>>> upmap cleaning.
>>>> (debug_mon 10 on the active mon should show those cleanups).
>>>>
>>>> I'm copying Xie Xingguo, and probably you should create a tracker for this.
>>>>
>>>> -- dan
>&
Hi all,
We've just hit our first OSD replacement on a host created with
`ceph-volume lvm batch` with mixed hdds+ssds.
The hdd /dev/sdq was prepared like this:
# ceph-volume lvm batch /dev/sd[m-r] /dev/sdac --yes
Then /dev/sdq failed and was then zapped like this:
# ceph-volume lvm zap /dev/
On Tue, Mar 19, 2019 at 12:17 PM Alfredo Deza wrote:
>
> On Tue, Mar 19, 2019 at 7:00 AM Alfredo Deza wrote:
> >
> > On Tue, Mar 19, 2019 at 6:47 AM Dan van der Ster
> > wrote:
> > >
> > > Hi all,
> > >
> > > We've just hit ou
On Tue, Mar 19, 2019 at 1:05 PM Alfredo Deza wrote:
>
> On Tue, Mar 19, 2019 at 7:26 AM Dan van der Ster wrote:
> >
> > On Tue, Mar 19, 2019 at 12:17 PM Alfredo Deza wrote:
> > >
> > > On Tue, Mar 19, 2019 at 7:00 AM Alfredo Deza wrote:
> > > >
On Tue, Mar 19, 2019 at 12:25 PM Dan van der Ster wrote:
>
> On Tue, Mar 19, 2019 at 12:17 PM Alfredo Deza wrote:
> >
> > On Tue, Mar 19, 2019 at 7:00 AM Alfredo Deza wrote:
> > >
> > > On Tue, Mar 19, 2019 at 6:47 AM Dan van der Ster
> > > wrote:
On Tue, Mar 19, 2019 at 9:43 AM Erwin Bogaard wrote:
>
> Hi,
>
>
>
> For a number of application we use, there is a lot of file duplication. This
> wastes precious storage space, which I would like to avoid.
>
> When using a local disk, I can use a hard link to let all duplicate files
> point to
Hi all,
We're currently upgrading our cephfs (managed by OpenStack Manila)
clusters to Mimic, and want to start enabling snapshots of the file
shares.
There are different ways to approach this, and I hope someone can
share their experiences with:
1. Do you give users the 's' flag in their cap, so
On Thu, Mar 21, 2019 at 8:51 AM Gregory Farnum wrote:
>
> On Wed, Mar 20, 2019 at 6:06 PM Dan van der Ster wrote:
>>
>> On Tue, Mar 19, 2019 at 9:43 AM Erwin Bogaard
>> wrote:
>> >
>> > Hi,
>> >
>> >
>> >
>> > For
On Thu, Mar 21, 2019 at 1:50 PM Tom Barron wrote:
>
> On 20/03/19 16:33 +0100, Dan van der Ster wrote:
> >Hi all,
> >
> >We're currently upgrading our cephfs (managed by OpenStack Manila)
> >clusters to Mimic, and want to start enabling snapshots of the file
>
; --> ceph-volume lvm activate successful for osd ID: 3
> --> ceph-volume lvm create successful for: /dev/sda
>
Yes that's it! Worked for me too.
Thanks!
Dan
> This is a Nautilus test cluster, but I remember having this on a
> Luminous cluster, too. I hope this helps.
>
>
ccounted in their quota by CephFS :/
>
>
> Paul
> On Wed, Mar 20, 2019 at 4:34 PM Dan van der Ster
> wrote:
> >
> > Hi all,
> >
> > We're currently upgrading our cephfs (managed by OpenStack Manila)
> > clusters to Mimic, and want to start enabling sna
See http://tracker.ceph.com/issues/38849
As an immediate workaround you can increase `mds bal fragment size
max` to 200000 (which will increase the max number of strays to 2
million).
(Try injecting that option into the mds's -- I think it is read at runtime.)
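Something like this, i.e. inject now and persist it in ceph.conf under
[mds] for the next restart:
# ceph tell mds.* injectargs '--mds_bal_fragment_size_max 200000'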
And you don't need to stop the mds's a
Hi all,
We have been benchmarking a hyperconverged cephfs cluster (kernel
clients + osd on same machines) for awhile. Over the weekend (for the
first time) we had one cephfs mount deadlock while some clients were
running ior.
All the ior processes are stuck in D state with this stack:
[] wait_on
https://croit.io
>
> croit GmbH
> Freseniusstr. 31h
> 81247 München
> www.croit.io
> Tel: +49 89 1896585 90
>
> On Mon, Apr 1, 2019 at 12:45 PM Dan van der Ster wrote:
> >
> > Hi all,
> >
> > We have been benchmarking a hyperconverged cephfs cluster (k
d tends to get imbalanced again as soon as i
> need to replace disks.
>
> On Thu, Apr 4, 2019 at 10:49 AM Iain Buclaw wrote:
>>
>> On Mon, 18 Mar 2019 at 16:42, Dan van der Ster wrote:
>> >
>> > The balancer optimizes # PGs / crush weight. That host looks alr
Which OS are you using?
With CentOS we find that the heap is not always automatically
released. (You can check the heap freelist with `ceph tell osd.0 heap
stats`).
As a workaround we run this hourly:
ceph tell mon.* heap release
ceph tell osd.* heap release
ceph tell mds.* heap release
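For example as a cron job (a sketch; assumes the admin keyring is
readable by root on that host):
# /etc/cron.d/ceph-heap-release (hypothetical)
0 * * * * root ceph tell 'mon.*' heap release; ceph tell 'osd.*' heap release; ceph tell 'mds.*' heap release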
-- Dan
O
Hi all,
We have a slight issue while trying to migrate a pool from filestore
to bluestore.
This pool used to have 20 million objects in filestore -- it now has
50,000. During its life, the filestore pgs were internally split
several times, but never merged. Now the pg _head dirs have mostly
empty
; sleep 5 ; done
After running that for awhile the PG filestore structure has merged
down and now listing the pool and backfilling are back to normal.
Thanks!
Dan
On Tue, Apr 9, 2019 at 7:05 PM Dan van der Ster wrote:
>
> Hi all,
>
> We have a slight issue while trying to migrate
proposals will be available by mid-May.
All the Best,
Dan van der Ster
CERN IT Department
Ceph Governing Board, Academic Liaison
[1] Sept 16 is the day after CERN Open Days, where there will be
plenty to visit on our campus if you arrive a couple of days before
https://home.cern/news/news/cern/cern
On Mon, 22 Apr 2019, 22:20 Gregory Farnum, wrote:
> On Sat, Apr 20, 2019 at 9:29 AM Igor Podlesny wrote:
> >
> > I remember seeing reports in regards but it's being a while now.
> > Can anyone tell?
>
> No, this hasn't changed. It's unlikely it ever will; I think NFS
> resolved the issue but it
The upmap balancer in v12.2.12 works really well... Perfectly uniform on
our clusters.
.. Dan
On Tue, 30 Apr 2019, 19:22 Kenneth Van Alstyne,
wrote:
> Unfortunately it looks like he’s still on Luminous, but if upgrading is an
> option, the options are indeed significantly better. If I
On Tue, 30 Apr 2019, 19:32 Igor Podlesny, wrote:
> On Wed, 1 May 2019 at 00:24, Dan van der Ster wrote:
> >
> > The upmap balancer in v12.2.12 works really well... Perfectly uniform on
> our clusters.
> >
> > .. Dan
>
> mode upmap ?
>
yes, mgr balancer
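For completeness, enabling it looks like this (on luminous you may
first need to set the min compat client to luminous for upmap):
# ceph osd set-require-min-compat-client luminous
# ceph balancer mode upmap
# ceph balancer on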
Removing pools won't make a difference.
Read up to slide 22 here:
https://www.slideshare.net/mobile/Inktank_Ceph/ceph-day-berlin-mastering-ceph-operations-upmap-and-the-mgr-balancer
..
Dan
(Apologies for terseness, I'm mobile)
On Tue, 30 Apr 2019, 20:02 Shain Miley, wrote:
> Here is the per
On Tue, Apr 30, 2019 at 8:26 PM Igor Podlesny wrote:
>
> On Wed, 1 May 2019 at 01:01, Dan van der Ster wrote:
> >> > The upmap balancer in v12.2.12 works really well... Perfectly uniform on
> >> > our clusters.
> >>
> >> mode upmap ?
> >
On Tue, Apr 30, 2019 at 9:01 PM Igor Podlesny wrote:
>
> On Wed, 1 May 2019 at 01:26, Igor Podlesny wrote:
> > On Wed, 1 May 2019 at 01:01, Dan van der Ster wrote:
> > >> > The upmap balancer in v12.2.12 works really well... Perfectly uniform
> > >> >
On Mon, Apr 1, 2019 at 1:46 PM Yan, Zheng wrote:
>
> On Mon, Apr 1, 2019 at 6:45 PM Dan van der Ster wrote:
> >
> > Hi all,
> >
> > We have been benchmarking a hyperconverged cephfs cluster (kernel
> > clients + osd on same machines) for awhile. Over the wee
y restarting
> the osd that it is reading from?
>
>
>
>
> -----Original Message-----
> From: Dan van der Ster [mailto:d...@vanderster.com]
> Sent: donderdag 2 mei 2019 8:51
> To: Yan, Zheng
> Cc: ceph-users; pablo.llo...@cern.ch
> Subject: Re: [ceph-users] co-located
Presumably the 2 OSDs you marked as lost were hosting those incomplete PGs?
It would be useful to double confirm that: check with `ceph pg <pgid>
query` and `ceph pg dump`.
(If so, this is why the ignore history les thing isn't helping; you
don't have the minimum 3 stripes up for those 3+1 PGs.)
If thos
On Tue, May 14, 2019 at 10:02 AM Kevin Flöh wrote:
>
> On 13.05.19 10:51 nachm., Lionel Bouton wrote:
> > Le 13/05/2019 à 16:20, Kevin Flöh a écrit :
> >> Dear ceph experts,
> >>
> >> [...] We have 4 nodes with 24 osds each and use 3+1 erasure coding. [...]
> >> Here is what happened: One osd daem
On Tue, May 14, 2019 at 10:59 AM Kevin Flöh wrote:
>
>
> On 14.05.19 10:08 vorm., Dan van der Ster wrote:
>
> On Tue, May 14, 2019 at 10:02 AM Kevin Flöh wrote:
>
> On 13.05.19 10:51 nachm., Lionel Bouton wrote:
>
> Le 13/05/2019 à 16:20, Kevin Flöh a écrit :
>
>
> "2(0)",
> "4(1)",
> "23(2)",
> "24(0)",
> "72(1)",
> "79(3)"
> ],
> "down_osds_w
On Wed, May 22, 2019 at 3:03 PM Rainer Krienke wrote:
>
> Hello,
>
> I created an erasure code profile named ecprofile-42 with the following
> parameters:
>
> $ ceph osd erasure-code-profile set ecprofile-42 plugin=jerasure k=4 m=2
>
> Next I created a new pool using the ec profile from above:
>
>
Did I understand correctly: you have a crush tree with both ssd and
hdd devices, and you want to direct PGs to the ssds, until they reach
some fullness threshold, and only then start directing PGs to the
hdds?
I can't think of a crush rule alone to achieve that. But something you
could do is add a
What's the full ceph status?
Normally recovery_wait just means that the relevant osd's are busy
recovering/backfilling another PG.
On Thu, May 23, 2019 at 10:53 AM Kevin Flöh wrote:
>
> Hi,
>
> we have set the PGs to recover and now they are stuck in
> active+recovery_wait+degraded and instructi
>
>io:
> client: 211KiB/s rd, 46.0KiB/s wr, 158op/s rd, 0op/s wr
>
> On 23.05.19 10:54 vorm., Dan van der Ster wrote:
> > What's the full ceph status?
> > Normally recovery_wait just means that the relevant osd's are busy
> > recovering/backfill
Hi Oliver,
We saw the same issue after upgrading to mimic.
IIRC we could make the max_bytes xattr visible by touching an empty
file in the dir (thereby updating the dir inode).
e.g. touch /cephfs/user/freyermu/.quota; rm /cephfs/user/freyermu/.quota
Does that work?
-- dan
On Mon, May 27, 2
On Mon, May 27, 2019 at 11:54 AM Oliver Freyermuth
wrote:
>
> Dear Dan,
>
> thanks for the quick reply!
>
> Am 27.05.19 um 11:44 schrieb Dan van der Ster:
> > Hi Oliver,
> >
> > We saw the same issue after upgrading to mimic.
> >
> > IIRC we could
Tuesday Sept 17 is indeed the correct day!
We had to move it by one day to get a bigger room... sorry for the confusion.
-- dan
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
Hi Reed and Brad,
Did you ever learn more about this problem?
We currently have a few inconsistencies arriving with the same env
(cephfs, v13.2.5) and symptoms.
PG Repair doesn't fix the inconsistency, nor does Brad's omap
workaround earlier in the thread.
In our case, we can fix by cp'ing the fi
Hi all,
Just a quick heads up, and maybe a check if anyone else is affected.
After upgrading our MDS's from v12.2.11 to v12.2.12, we started
getting crashes with
/builddir/build/BUILD/ceph-12.2.12/src/mds/MDSRank.cc: 1304:
FAILED assert(session->get_nref() == 1)
I opened a ticket here with
On Thu, Jun 6, 2019 at 8:00 PM Sage Weil wrote:
>
> Hello RBD users,
>
> Would you mind running this command on a random OSD on your RBD-oriented
> cluster?
>
> ceph-objectstore-tool \
> --data-path /var/lib/ceph/osd/ceph-NNN \
>
> '["meta",{"oid":"snapmapper","key":"","snapid":0,"hash":2758339
Hi,
This looks like a tunables issue.
What is the output of `ceph osd crush show-tunables`?
-- Dan
On Fri, Jun 14, 2019 at 11:19 AM Luk wrote:
>
> Hello,
>
> Maybe someone was already fighting with this kind of stuck state in ceph.
> This is a production cluster, can't/don't want to make wrong s
Ahh I was thinking of chooseleaf_vary_r, which you already have.
So probably not related to tunables. What is your `ceph osd tree` ?
By the way, 12.2.9 has an unrelated bug (details
http://tracker.ceph.com/issues/36686)
AFAIU you will just need to update to v12.2.11 or v12.2.12 for that fix.
-- D
Nice to hear this was resolved in the end.
Coming back to the beginning -- is it clear to anyone what was the
root cause and how other users can avoid this from happening? Maybe
some better default configs to warn users earlier about too-large
omaps?
Cheers, Dan
On Thu, Jun 13, 2019 at 7:36 PM H
fore, but for some reasons we could
> not react quickly. We accepted the risk of the bucket becoming slow, but
> had not thought of further risks ...
>
> On 17.06.19 10:15, Dan van der Ster wrote:
> > Nice to hear this was resolved in the end.
> >
> > Coming bac
Hi all,
I'm trying to compress an rbd pool via backfilling the existing data,
and the allocated space doesn't match what I expect.
Here is the test: I marked osd.130 out and waited for it to erase all its data.
Then I set (on the pool) compression_mode=force and compression_algorithm=zstd.
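That is, something along these lines (pool name hypothetical):
# ceph osd pool set rbd compression_mode force
# ceph osd pool set rbd compression_algorithm zstd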
Then I
o set
bluestore_compression_mode=force on the osd.
-- dan
[1] http://docs.ceph.com/docs/master/rados/operations/pools/#set-pool-values
On Thu, Jun 20, 2019 at 4:33 PM Dan van der Ster wrote:
>
> Hi all,
>
> I'm trying to compress an rbd pool via backfilling the existing data,
> and th
...)
Now I'll try to observe any performance impact of increased
min_blob_size... Do you recall if there were some benchmarks done to
pick those current defaults?
Thanks!
Dan
-- Dan
>
>
> Thanks,
>
> Igor
>
> On 6/20/2019 5:33 PM, Dan van der Ster wrote:
> > Hi