Which, as a user, is very surprising to me too.
--
Mark Schouten | Tuxis Internet Engineering
KvK: 61527076 | http://www.tuxis.nl/
T: 0318 200208 | i...@tuxis.nl
- Original Message -
From: Wido den Hollander (w...@42on.com)
Date: 16-11-2018 08:25
To: Mark Schouten (m...@tuxis.nl)
On 11/15/18 7:51 PM, koukou73gr wrote:
> Are there any means to notify the administrator that an auto-repair has
> taken place?
I don't think so. You'll see the cluster go to HEALTH_ERR for a while
before it turns to HEALTH_OK again after the PG has been repaired.
You would have to search the c
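If notification is a concern, one way to spot past repairs (a hedged sketch; log locations can differ per deployment) is to grep the cluster log on a monitor host, or pull recent entries from the cluster log buffer:
# On a monitor host; /var/log/ceph/ceph.log is the usual cluster log location
grep -iE 'repair|inconsistent' /var/log/ceph/ceph.log
# Or ask the cluster for its recent log entries
ceph log last 200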
On 11/15/18 7:45 PM, Mark Schouten wrote:
> As a user, I’m very surprised that this isn’t a default setting.
>
That is because you can also have FileStore OSDs in a cluster, on which
such an auto-repair is not safe.
Wido
> Mark Schouten
>
>> On 15 Nov 2018, at 18:40, Wido den Hollander
Ok, thx, I'll try ceph-disk.
From: Alfredo Deza
Sent: 15 November 2018 20:16
To: Klimenko, Roman
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Migration osds to Bluestore on Ubuntu 14.04 Trusty
On Thu, Nov 15, 2018 at 8:57 AM Klimenko, Rom
Exactly. But write operations should go to all nodes.
v
On Wed, Nov 14, 2018 at 9:52 PM Konstantin Shalygin wrote:
> On 11/15/18 9:31 AM, Vlad Kopylov wrote:
> > Thanks Konstantin, I already tried accessing it in different ways, and
> > the best I got was bulk-renamed files and other non-presentable
On Thu, Nov 15, 2018 at 2:30 PM 赵赵贺东 wrote:
>
> I tested in a 12-OSD cluster, changing objecter_inflight_op_bytes from 100MB to
> 300MB; performance does not seem to change noticeably.
> But librbd already performed better in the 12-OSD cluster from the beginning,
> so this change seems rather meaningless to me.
> In
Try restarting osd.29, then use pg repair. If this doesn't work, or the error
appears again after a while, scan the HDD used for osd.29; there may be bad
sectors on the disk, in which case just replace it with a new one.
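In concrete terms, that advice could look roughly like this (the device name and PG id are placeholders, adjust to your setup):
systemctl restart ceph-osd@29
ceph pg repair <pgid>        # the inconsistent PG reported in ceph health detail
smartctl -a /dev/sdX         # check SMART data of the disk backing osd.29
dmesg | grep -i sdX          # look for I/O errors reported by the kernel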
On Thu, Nov 15, 2018 at 5:00 PM Marc Roos wrote:
>
> Forgot, these are bluestore osds
Hi,
[apropos auto-repair for scrub settings]
On 15/11/2018 18:45, Mark Schouten wrote:
As a user, I’m very surprised that this isn’t a default setting.
We've been too cowardly to do it so far; even on a large cluster the
occasional ceph pg repair hasn't taken up too much admin time, and the
Are there any means to notify the administrator that an auto-repair has
taken place?
-K.
On 2018-11-15 20:45, Mark Schouten wrote:
As a user, I’m very surprised that this isn’t a default setting.
Mark Schouten
On 15 Nov 2018, at 18:40, Wido den Hollander wrote the
following:
Hi
As a user, I’m very surprised that this isn’t a default setting.
Mark Schouten
> On 15 Nov 2018, at 18:40, Wido den Hollander wrote the
> following:
>
> Hi,
>
> This question is actually still outstanding. Is there any good reason to
> keep auto repair for scrub errors disabled with
Hi,
This question is actually still outstanding. Is there any good reason to
keep auto repair for scrub errors disabled with BlueStore?
I couldn't think of a reason when using size=3 and min_size=2, so just
wondering.
Thanks!
Wido
On 8/24/18 8:55 AM, Wido den Hollander wrote:
> Hi,
>
> osd_sc
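For reference, the auto-repair settings this thread keeps circling around (presumably osd_scrub_auto_repair and friends) could be enabled on the OSDs like this; a minimal sketch, assuming a BlueStore-only cluster and option names as of Luminous/Mimic, to be verified against your release:
[osd]
osd scrub auto repair = true
# upper bound on how many scrub errors auto-repair will fix on its own
osd scrub auto repair num errors = 5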
On Thu, Nov 15, 2018 at 8:57 AM Klimenko, Roman wrote:
>
> Hi everyone!
>
> As I noticed, ceph-volume lacks Ubuntu Trusty compatibility
> https://tracker.ceph.com/issues/23496
>
> So, I can't follow this instruction
> http://docs.ceph.com/docs/mimic/rados/operations/bluestore-migration/
>
> Do
Hi,
I’m trying to set up one-way rbd-mirroring for a Ceph cluster used by an
OpenStack cloud, but the rbd-mirror is unable to “catch up” with the
changes. However, it appears to me that this is not due to the Ceph clusters
or the network, but due to the server running the rbd-mirror process running
out
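When debugging this kind of lag, a first step (a sketch; pool and image names are placeholders) is usually to look at the mirroring status on the backup site:
rbd mirror pool status <pool> --verbose   # per-image replay state and description
rbd mirror image status <pool>/<image>    # detail for a single lagging image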
Hi everyone!
As I noticed, ceph-volume lacks Ubuntu Trusty compatibility
https://tracker.ceph.com/issues/23496
So, I can't follow this instruction
http://docs.ceph.com/docs/mimic/rados/operations/bluestore-migration/
Do I have any other option to migrate my Filestore osds (Luminous 12.2.9) t
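Yes, on Trusty the older ceph-disk tooling still works with Luminous; a rough sketch of converting one OSD to BlueStore (device names are hypothetical, the OSD should be drained first, and it may come back with a new OSD id unless you take extra steps):
ceph osd out 12                      # let data rebalance away from the OSD first
systemctl stop ceph-osd@12           # on Trusty this may be an upstart job instead
ceph-disk zap /dev/sdX               # wipes the old FileStore partitions
ceph-disk prepare --bluestore /dev/sdX
ceph-disk activate /dev/sdX1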
Thanks, Jeff, for taking the trouble to respond and for your willingness to help.
Here are some questions:
- Apparently rados_cluster is gone in 2.8. There are "fs" and "fs_ng" now.
However, I was not able to find a config depicting their usage.
Would you be able to share your working one?
- How would on
Hi,
Recently we've seen multiple messages on the mailinglists about people
seeing HEALTH_WARN due to large OMAP objects on their cluster. This is
due to the fact that starting with 12.2.6 OSDs warn about this.
I've got multiple people asking me the same questions and I've done some
digging around
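For anyone hitting this warning, a couple of starting points (a sketch; pool, object, and OSD names are placeholders):
# The deep-scrub that flagged the object records it in the cluster log:
grep -i 'large omap object' /var/log/ceph/ceph.log
# Count the omap keys on a suspect object:
rados -p <pool> listomapkeys <object> | wc -l
# The warning threshold introduced in 12.2.6 is controlled by:
ceph daemon osd.<id> config get osd_deep_scrub_large_omap_object_key_threshold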
Hi,
We're trying to test RBD on a small Ceph cluster running on VMs (8 OSDs, 3 mon+mgr), using
rbd bench on 2 RBD images from 2 pools with different replication settings:
For pool 4copy:
---
rule 4copy_rule {
        id 1
        type replicated
        min_size 2
        max_size 10
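For context, the comparison boils down to running the same rbd bench against an image in each pool; a minimal sketch (pool and image names are placeholders):
rbd bench --io-type write --io-size 4M --io-threads 16 --io-total 1G --io-pattern seq <4copy-pool>/bench-img
rbd bench --io-type write --io-size 4M --io-threads 16 --io-total 1G --io-pattern seq <2copy-pool>/bench-img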
I have now had the time to test, and after installing this package, uploads to
rbd are working perfectly.
Thank you very much for sharing this!
Kevin
On Wed, 7 Nov 2018 at 15:36, Kevin Olbrich wrote:
> On Wed, 7 Nov 2018 at 07:40, Nicolas Huillard <
> nhuill...@dolomede.fr> wrote:
>
>>
On 11/15/18 4:37 AM, Gregory Farnum wrote:
> This is weird. Can you capture the pg query for one of them and narrow
> down in which epoch it “lost” the previous replica and see if there’s
> any evidence of why?
So I checked it further and dug deeper into the logs and found this on
osd.1982:
201
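For the record, capturing what Greg asks for looks roughly like this (the PG id is a placeholder):
ceph pg <pgid> query > pg_query.json    # includes peering and past_intervals information
ceph pg map <pgid>                      # current up/acting sets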
I tested in a 12-OSD cluster, changing objecter_inflight_op_bytes from 100MB to
300MB; performance does not seem to change noticeably.
But librbd already performed better in the 12-OSD cluster from the beginning,
so this change seems rather meaningless to me.
In a small cluster (12 OSDs), 4M seq write performance for L
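For reference, these are client-side (librbd/objecter) throttles, so a test like this would set them in the [client] section of ceph.conf on the benchmark client; a sketch, using the 300 MB value from above:
[client]
objecter inflight op bytes = 314572800   # 300 MB; the default is 104857600 (100 MB)
objecter inflight ops = 1024             # default; raise together with the bytes limit if needed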
Forgot, these are bluestore osds
-Original Message-
From: Marc Roos
Sent: Thursday, 15 November 2018 9:59
To: ceph-users
Subject: [ceph-users] pg 17.36 is active+clean+inconsistent head
expected clone 1 missing?
I thought I'd give it another try, asking again here since there is
I thought I'd give it another try, asking again here since there is
another current thread. I have been having this error for a year or so.
Of course I have already tried:
ceph pg deep-scrub 17.36
ceph pg repair 17.36
[@c01 ~]# rados list-inconsistent-obj 17.36
{"epoch":24363,"inconsistents":[
Quoting Stefan Kooman (ste...@bit.nl):
> Quoting Patrick Donnelly (pdonn...@redhat.com):
> > Thanks for the detailed notes. It looks like the MDS is stuck
> > somewhere it's not even outputting any log messages. If possible, it'd
> > be helpful to get a coredump (e.g. by sending SIGQUIT to the MDS)
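For anyone else needing to do this, a rough sketch of grabbing that core dump on the MDS host (paths depend on your core_pattern and service manager settings):
# Core dumps must be allowed for the running daemon (e.g. LimitCORE=infinity
# in the ceph-mds systemd unit, or ulimit -c unlimited before it was started).
cat /proc/sys/kernel/core_pattern      # shows where the core file will land
kill -QUIT $(pidof ceph-mds)           # SIGQUIT makes the MDS dump core and exit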