Hi,
did you wait for the backfill to complete before removing the old
drives? What is your environment? Are the affected PGs from an EC
pool? Does [1] apply to you?
[1] http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-July/035743.html
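The environment details asked for above can be gathered with a few read-only commands; this is a minimal sketch, run from any node with an admin keyring, and it makes no changes to the cluster:

```shell
# Read-only survey of the cluster state relevant to the questions above.
ceph -s                   # overall health, count of incomplete/inactive PGs
ceph osd pool ls detail   # per-pool type (replicated vs erasure), size/min_size
ceph pg ls incomplete     # the incomplete PGs and their acting sets
ceph osd tree             # which OSDs are up/down and in/out
```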
Quoting huxia...@horebdata.cn:
Dear Ceph folks,
I am using a replicated pool with min_size=1. I do not have any disk failures,
so I did not expect incomplete PGs, but they appeared after OSDs flapped.
huxia...@horebdata.cn
From: Eugen Block
Date: 2020-08-15 09:39
To: huxiaoyu
CC: ceph-users
Subject: Re: [ceph-users] how to handle incomplete PGs
Hi
Please provide more details about your environment, otherwise it's
just guessing what could have happened.
Quoting huxia...@horebdata.cn:
I am using a replicated pool with min_size=1. I do not have any disk
failures, so I did not expect incomplete PGs, but they appeared after
OSDs flapped.
I have fixed the incomplete PGs in my environment, and I believe the situation
was caused by OSD flaps during backfilling.
So I am asking for general guidelines on 1) how to avoid incomplete PGs as much
as possible, since they risk data loss; and 2) whether there is a tool or
script to reliably fix them.
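For the second question, a minimal diagnostic sketch follows; the PG id "2.1f" is hypothetical, and the config option shown is a last-resort workaround, not a general-purpose fix:

```shell
# Find the incomplete PGs and inspect why peering is blocked.
ceph pg ls incomplete
ceph pg 2.1f query | less      # look at "peering_blocked_by" and past intervals

# If peering is blocked only by lost interval history while the data is
# actually present, a commonly cited -- and risky -- workaround is to let
# OSDs ignore last-epoch-started history. Use only as a last resort,
# with backups, and revert it once the PG has peered:
ceph config set osd osd_find_best_info_ignore_history_les true
# ... wait for the PG to peer, then:
ceph config set osd osd_find_best_info_ignore_history_les false
```

If peering cannot be unblocked at all, exporting the PG with ceph-objectstore-tool from a surviving OSD is usually the next step, but that is far more invasive.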
best
Dear Ceph folks,
Suppose I have extremely reliable OSDs, which almost never fail (of course an
imaginary ideal case). The OSDs may still go UP or DOWN due to network
issues, but I would like OSDs never to be marked OUT, and thus no need for
backfilling on OSD failure.
To achieve the above goal,
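Two standard ways to prevent automatic out-marking are sketched below; the first is the usual flag for planned maintenance, the second is a persistent monitor setting (the interval value is an illustrative choice):

```shell
# 1) Cluster-wide flag, typical for planned maintenance windows:
ceph osd set noout
# ... and when normal behaviour should resume:
ceph osd unset noout

# 2) Persistent setting: raise the down->out timer (default 600 s).
#    A very large value effectively disables auto-out, but down OSDs
#    then require manual attention instead of self-healing.
ceph config set mon mon_osd_down_out_interval 86400
```

Note that noout only prevents rebalancing; it does not keep I/O flowing if fewer than min_size replicas of a PG remain up.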
Hi,
I am trying to add a VMware host in Ceph iSCSI. I followed the guide exactly.
But when I add the iSCSI gateway IP in "Dynamic Discovery" and rescan the
adapter, the "Paths" are not loaded. In vmkernel.log, I receive these messages:
2020-08-15T13:50:36.166Z cpu21:2103927)iscsi_vmk:
iscsivmk_ConnRxNotify
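When ESXi discovery finds no paths, a few checks on the gateway side usually narrow it down; this sketch assumes the ceph-iscsi package with its standard service names:

```shell
# On the Ceph iSCSI gateway node:
systemctl status rbd-target-api rbd-target-gw   # are the gateway daemons running?
ss -tlnp | grep 3260                            # is the iSCSI portal actually listening?
gwcli ls                                        # are the target, client IQNs and LUN mappings defined?
```

A common cause is that the ESXi initiator IQN was never added as a client under the target in gwcli, or that CHAP credentials on the ESXi side do not match the ones configured on the gateway.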
Did you try to restart the said OSDs?
Hth
Mehmet
On 12 August 2020 21:07:55 CEST, Martin Palma wrote:
>> Are the OSDs online? Or do they refuse to boot?
>Yes. They are up and running and not marked as down or out of the
>cluster.
>
>> Can you list the data with ceph-objectstore-tool on thes
Hi all,
We just completed maintenance on an OSD node and we ran into an issue where all
data seemed to stop flowing while the node was down. We couldn't connect to any
of our VMs during that time. I was under the impression that by setting the
'noout' flag, you would not get the rebalance of t
Yeah, the VMs didn't die completely but they were all inaccessible during the
maintenance period. Once the maintenance node came back up, it started flowing
again.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-us
What are size and min_size for that pool?
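These two values can be read directly; "mypool" below is a hypothetical pool name:

```shell
ceph osd pool get mypool size       # number of replicas kept
ceph osd pool get mypool min_size   # replicas required for I/O to proceed
```

This matters for the reported symptom: with size=2 and min_size=2, taking one node down stops I/O on every PG with a replica there, even with noout set, because noout only suppresses rebalancing and does not relax the min_size check.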
Zitat von Matt Dunavant :
Yeah, the VMs didn't die completely but they were all inaccessible
during the maintenance period. Once the maintenance node came back
up, it started flowing again.
Do you mean I/O stopped on your VMs?
Sent from mobile
> On 15 Aug 2020 at 17:48, Matt Dunavant
> wrote:
>
> Hi all,
>
> We just completed maintenance on an OSD node and we ran into an issue where
> all data seemed to stop flowing while the node was down. We couldn't
On 8/14/20 11:52 AM, Eugen Block wrote:
> Usually it should also accept the device path (although I haven't tried that
> in Octopus yet), you could try `ceph-volume lvm prepare --data
> /path/to/device` first and then activate it. If that doesn't work, try to
> create a vg and lv and try it w
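The manual VG/LV route mentioned above can be sketched as follows; the device /dev/sdb and the VG/LV names are hypothetical and must be adapted to the host:

```shell
# Create a volume group and logical volume on the target device.
vgcreate ceph-vg-sdb /dev/sdb
lvcreate -l 100%FREE -n osd-lv ceph-vg-sdb

# Prepare the LV as a bluestore OSD, then activate it.
ceph-volume lvm prepare --data ceph-vg-sdb/osd-lv
ceph-volume lvm activate --all      # activates any prepared-but-inactive OSDs
```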
Hi All,
Is there any way to see the list of files under buckets in ceph dashboard
for rados object storage. At present I can only see buckets details.
Did you check the ceph status? ("ceph -s")
On 16/08/2020 1:47 am, Matt Dunavant wrote:
Hi all,
We just completed maintenance on an OSD node and we ran into an issue where all
data seemed to stop flowing while the node was down. We couldn't connect to any
of our VMs during that time. I was und
Yes, but that didn't help. After some time they had blocked requests again
and remained inactive and incomplete.
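When restarts don't help, it is worth confirming exactly which OSD is blocking each stuck PG before restarting anything else; the OSD id "12" and PG "2.1f" below are hypothetical:

```shell
ceph pg dump_stuck inactive     # which PGs are stuck, and their acting OSDs
ceph pg 2.1f query | less       # "peering_blocked_by" names the blocking OSD
systemctl restart ceph-osd@12   # restart the specific OSD blocking the PG
```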
On Sat, 15 Aug 2020 at 16:58, wrote:
> Did you try to restart the said OSDs?
>
>
>
> Hth
>
> Mehmet
>
>
>
> On 12 August 2020 21:07:55 CEST, Martin Palma wrote:
>
> >> Are the OS