Hi all,
I was coming from Mimic before, but I reverted the MDS to 14.2.2 on
Friday and I haven't observed the issue since then.
Thanks!!
Kenneth
On 20/09/2019 03:43, Yan, Zheng wrote:
On Thu, Sep 19, 2019 at 11:37 PM Dan van der Ster wrote:
You were running v14.2.2 before?
It seems that th
Hi all,
When syncing data with rsync, I'm often getting blocked slow requests,
which also block access to this path.
2019-09-23 11:25:49.477 7f4f401e8700 0 log_channel(cluster) log [WRN]
: slow request 31.895478 seconds old, received at 2019-09-23
11:25:17.598152: client_request(client.3835
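A rough sketch of narrowing this down, assuming an otherwise reachable active MDS (the daemon name and client id below are placeholders):

$ ceph health detail                         # lists the slow requests and any clients failing to respond to capability release
$ ceph daemon mds.<name> dump_ops_in_flight  # run on the MDS host; shows the blocked client_request ops and their event history
$ ceph daemon mds.<name> session ls          # maps the client.<id> from the log line to a hostname/mount

If a single client turns out to hold caps on the rsync target path, evicting or remounting that client is often what unblocks the others.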
Dear Jason
A small update on the setup: the image syncing now shows 8% and
remains stuck in the same status... after 1 day I can see the image got
replicated to the other side.
Please answer a few of my queries:
1. Will the image sync work one by one, one image after another, or do all
images get synced at the same time?
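In case it helps with question 1, a minimal sketch for watching the sync (pool and image names are placeholders):

$ rbd mirror image status <pool>/<image>    # per-image state, e.g. up+syncing with a progress percentage
$ rbd mirror pool status <pool> --verbose   # the same for every mirrored image in the pool

As far as I know, rbd-mirror syncs a limited number of images in parallel rather than strictly one by one or all at once; the rbd_mirror_concurrent_image_syncs option (default 5, if I recall correctly) controls that concurrency.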
And to come full circle,
After this whole saga, I now have a scrub error on the new device health
metrics pool/PG, in what looks to be exactly the same way.
So I am at a loss as to whatever it is that I am doing incorrectly, as a scrub
error obviously makes the monitoring suite very happy.
> $ ceph
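For anyone hitting the same thing, a rough sketch of locating the inconsistency (the pgid is a placeholder):

$ ceph health detail                                       # names the inconsistent PG
$ rados list-inconsistent-obj <pgid> --format=json-pretty  # shows which object/shard the deep scrub disagreed on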
On Thu, Sep 19, 2019 at 2:36 AM Yoann Moulin wrote:
>
> Hello,
>
> I have a Ceph Nautilus cluster (14.2.1), used for CephFS only, on 40x 1.8T SAS
> disks (no SSDs) across 20 servers.
>
> > cluster:
> > id: 778234df-5784-4021-b983-0ee1814891be
> > health: HEALTH_WARN
> > 2 MDSs report s
On Thu, Sep 19, 2019 at 4:34 AM M Ranga Swami Reddy
wrote:
>
> Hi - I am using Ceph 12.2.11, and I am getting a few scrub errors. To fix these
> scrub errors I ran "ceph pg repair ".
> But the scrub errors are not going away, and the repair is taking a long time, like 8-12 hours.
Depending on the size of the PG
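As I understand it, "ceph pg repair" is queued and executed as a deep scrub of the whole PG, so on a large PG the wall-clock time includes waiting behind other scheduled scrubs. A rough workflow, with <pgid> as a placeholder:

$ ceph health detail          # identify the inconsistent PG
$ ceph pg deep-scrub <pgid>   # optionally re-verify the inconsistency first
$ ceph pg repair <pgid>       # queues a repairing deep scrub of the PG
$ ceph -w                     # watch the cluster log for the repair start/result messages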
Hi folks,
While practicing some disaster recovery, I noticed that it currently seems
impossible to add both a v1 and a v2 address for a monitor to a monmap using
monmaptool. Is there any way to create a monmap manually that includes both
protocol versions? I'm currently on Ceph version 14.2.4.
- Alberto C
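In case it helps: I believe monmaptool grew an --addv option alongside the msgr2 work, which takes a full address vector (addresses and fsid below are placeholders; check "monmaptool --help" on your 14.2.4 build):

$ monmaptool --create --fsid <fsid> /tmp/monmap
$ monmaptool --addv a '[v2:10.0.0.1:3300,v1:10.0.0.1:6789]' /tmp/monmap
$ monmaptool --print /tmp/monmap   # should list both the v2 and v1 address for mon.a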
On Mon, Sep 23, 2019 at 4:14 AM Kenneth Waegeman
wrote:
>
> Hi all,
>
> When syncing data with rsync, I'm often getting blocked slow requests,
> which also block access to this path.
>
> > 2019-09-23 11:25:49.477 7f4f401e8700 0 log_channel(cluster) log [WRN]
> > : slow request 31.895478 seconds o
Hi together,
for those reading along: we had to turn off all OSDs backing our cephfs-data
pool during the intervention; luckily, everything came back fine.
However, we managed to keep the MDSs and the OSDs backing the cephfs-metadata
pool, as well as the MONs, online. We restarted those sequentially afterwards.
Hi together,
the EU mirror still seems to be out of sync - does somebody on this list
happen to know whom to contact about this?
Or is this mirror unmaintained, and should we switch to something else?
Going through the list of appropriate mirrors from
https://docs.ceph.com/docs/master/install/m
Hi,
I'll have a look at the status of se.ceph.com tomorrow morning, it's
maintained by us.
Kind Regards,
David
On mån, 2019-09-23 at 22:41 +0200, Oliver Freyermuth wrote:
> Hi together,
>
> the EU mirror still seems to be out-of-sync - does somebody on this
> list happen to know whom to conta
Hi David,
Red Hat staff transitioned the mirror mailing list to a new domain + a
self-hosted instance of Mailman on this date:
Subject: [Ceph-mirrors] FYI: Mailing list domain change
Date: Mon, 17 Jun 2019 16:19:55 -0400
From: David Galloway
To: ceph-mirr...@lists.ceph.com
Dear Matthew,
On 2019-09-24 01:50, Matthew Taylor wrote:
> Hi David,
> Red Hat staff transitioned the mirror mailing list to a new domain + a
> self-hosted instance of Mailman on this date:
> Subject: [Ceph-mirrors] FYI: Mailing list domain change
> Date: Mon, 17 Jun 2019 16:19:55 -0400
> From
On 9/17/19 11:01 PM, Oliver Freyermuth wrote:
> Dear Cephalopodians,
>
> I realized just now that:
> https://eu.ceph.com/rpm-nautilus/el7/x86_64/
> still holds only releases up to 14.2.2, and nothing is to be seen of
> 14.2.3 or 14.2.4,
> while the main repository at:
> https://download.ceph