We had an OSD reporting lots of errors, so I tried to remove
it by doing:
ceph orch osd rm 139 --zap
It started moving all the data. Eventually we got to the point where
there was only 1 PG backfilling, but that seems to be stuck now. I think
it may be because, in the process, another OSD
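(For reference, a few read-only commands that usually show where a backfill like this is stuck; the PG id below is just a placeholder:)

ceph orch osd rm status        # is osd.139 still draining, or waiting on something?
ceph health detail             # names the stuck PG and the reason
ceph pg dump_stuck unclean     # list PGs that are not active+clean
ceph pg 2.1f query             # replace 2.1f with the stuck PG's id
ceph osd df tree               # check whether the backfill target OSD is near full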
I'm trying to resize a block device using "rbd resize". The block device
is pretty huge (100+ TB). The resize has been running for over a week, and
I have no idea if it's actually doing anything, or if it's just hanging or
in some infinite loop. Is there any way of getting a progress report from
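(A couple of read-only commands can give a rough sense of whether anything is happening; pool/image names below are placeholders. Note that rbd resize normally prints a progress bar unless it was started with --no-progress, and that shrinking an image has to trim objects, which on 100+ TB can take a very long time:)

rbd info mypool/myimage     # size currently recorded in the image header
rbd du mypool/myimage       # provisioned vs. actually used space
rados df                    # watch the pool's object count change over time
</rados df>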
This may be too broad of a topic, or opening a can of worms, but we are
running a CEPH environment and I was wondering if there's any guidance
about this question:
Given that some group would like to store 50-100 TBs of data on CEPH and
use it from a linux environment, are there any advantages
I was wondering about performance differences between cephfs and rbd, so
I devised this quick test. The results were pretty surprising to me.
The test: on a very idle machine, make 2 mounts. One is a cephfs mount,
the other an rbd mount. In each directory, copy a humongous .tgz file
(1.5 TB) a
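(A setup along those lines might look roughly like the sketch below; the monitor address, pool, image, mount points and file names are all placeholders, and the page cache is dropped between runs:)

# CephFS side (kernel client)
mount -t ceph mon1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret

# RBD side (kernel client)
rbd create testpool/testimg --size 2T
rbd map testpool/testimg          # typically shows up as /dev/rbd0
mkfs.xfs /dev/rbd0
mount /dev/rbd0 /mnt/rbdtest

# the actual comparison
echo 3 > /proc/sys/vm/drop_caches
time cp huge.tgz /mnt/cephfs/
echo 3 > /proc/sys/vm/drop_caches
time cp huge.tgz /mnt/rbdtest/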
> Please describe the client system.
> Are you using the same one for CephFS and RBD?
Yes
> Kernel version?
Centos 8 4.18.0-240.15.1.el8_3.x86_64 (but also tried on a Centos 7
machine, similar results)
> BM or VM?
Bare Metal
> KRBD or libvirt/librbd?
I'm assuming KRBD. I just did a simple
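(A quick way to confirm the mount really goes through the kernel RBD client rather than librbd:)

rbd showmapped       # lists images mapped through the krbd module
lsblk | grep rbd     # a /dev/rbdX block device means krbd
lsmod | grep rbd     # the rbd kernel module should be loaded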
I'm trying to upgrade our 3-monitor cluster from Centos 7 and Nautilus to
Rocky 9 and Quincy. This has been a very slow process of upgrading one
thing, running the cluster for a while, then upgrading the next thing. I
first upgraded to the latest Centos 7 and then upgraded to Octopus. That worked
fine. Th
but I have gone to
Octopus (no problems) and now to Pacific (problems).
On Thu, Oct 26, 2023 at 3:36 PM Tyler Stachecki
wrote:
>
> On Thu, Oct 26, 2023, 6:16 PM Jorge Garcia wrote:
> >
> > from Centos 7 and Nautilus to
> > Rocky 9 and Quincy.
>
> I hate to b
ntos7+Octopus to Rocky8+Octopus to Rocky8+Pacific to Rocky9+Pacific to
Rocky9+Quincy
I was just hoping that I could skip the Rocky8 installation altogether...
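(Whichever path is taken, a couple of sanity checks between hops are cheap; the release name below is just this cluster's example:)

ceph versions                          # every daemon should report the release you expect
ceph osd require-osd-release pacific   # only once all OSDs are on the new release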
On Thu, Oct 26, 2023 at 4:57 PM Tyler Stachecki
wrote:
> On Thu, Oct 26, 2023 at 6:52 PM Jorge Garcia wrote:
> >
> > H
l the monitors and all the managers to Pacific and Rocky 9. Now on to the
OSDs. Well, maybe next week...
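(For the OSD hosts, the usual rolling pattern is roughly the following, assuming package-based rather than cephadm-managed OSDs:)

ceph osd set noout                # avoid rebalancing while a host is down
# ...reinstall/upgrade the host, then bring its OSDs back...
systemctl start ceph-osd.target
ceph -s                           # wait until all PGs are active+clean again
ceph osd unset noout              # once the whole host is back in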
On Thu, Oct 26, 2023 at 5:37 PM Tyler Stachecki
wrote:
> On Thu, Oct 26, 2023, 8:11 PM Jorge Garcia wrote:
>
>> Oh, I meant that "ceph -s" just hangs. I didn't even tr
We have a Nautilus cluster that just got hit by a bad power outage. When
the admin systems came back up, only the ceph-mgr process was running (none
of the ceph-mon processes would start). I tried following the instructions
in
https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-mo
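(The core of that procedure is rebuilding the mon store from the OSDs. Condensed, and with example paths, it looks something like this; the OSDs have to be stopped while ceph-objectstore-tool runs, and the collected /root/mon-store directory has to be carried from OSD host to OSD host so it accumulates data from every OSD:)

# on each OSD host, extract cluster map data from every stopped OSD
for osd in /var/lib/ceph/osd/ceph-*; do
    ceph-objectstore-tool --data-path "$osd" --no-mon-config \
        --op update-mon-db --mon-store-path /root/mon-store
done

# then, with the fully collected store on one node, rebuild the mon db
ceph-monstore-tool /root/mon-store rebuild -- --keyring /etc/ceph/ceph.client.admin.keyring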
OK, I'll try to give more details as I remember them.
1. There was a power outage and then power came back up.
2. When the systems came back up, I did a "ceph -s" and it never
returned. Further investigation revealed that the ceph-mon processes had
not started in any of the 3 monitors. I looke
change the MDS and RGW's
ability to start and stay running. Have you tried just restarting the
MDS / RGW daemons again?
Respectfully,
*Wes Dillingham*
w...@wesdillingham.com
LinkedIn <http://www.linkedin.com/in/wesleydillingham>
On Thu, Sep 15, 2022 at 5:54 PM Jorge Garcia wrote:
I have been trying to recover our ceph cluster from a power outage. I
was able to recover most of the cluster using the data from the OSDs.
But the MDS maps were gone, and now I'm trying to recover that. I was
looking around and found a section in the Quincy manual titled
RECOVERING THE FILE SY
/51341
Also, here was the change which added the --recover flag:
https://tracker.ceph.com/issues/51716
There you can see the old process described again.
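For reference, the newer variant with that flag boils down to something like this (the fs and pool names are whatever the original file system used):

ceph fs new cephfs cephfs_metadata cephfs_data --force --recover
# --recover leaves rank 0 marked failed and the fs not joinable, so once ready:
ceph fs set cephfs joinable true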
Good luck,
Dan
On Tue, Sep 20, 2022 at 9:00 PM Jorge Garcia wrote:
I have been trying to recover our ceph cluster from a power outage. I
was
We have a ceph cluster with a cephfs filesystem that we use mostly for
backups. When I do a "ceph -s" or a "ceph df", it reports lots of space:
  data:
    pools:   3 pools, 4104 pgs
    objects: 1.09 G objects, 944 TiB
    usage:   1.5 PiB used, 1.0 PiB / 2.5 PiB avail

GLOBAL:
    SI
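(To compare those cluster-wide numbers with what each pool, and therefore cephfs, can actually use, these are usually more informative:)

ceph df detail   # per-pool USED and MAX AVAIL, which is what cephfs can really grow into
ceph osd df      # per-OSD utilization; one nearly-full OSD caps MAX AVAIL for the whole pool
ceph fs status   # what the file system itself reports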
I have been trying to install Nautilus using ceph-deploy, but somehow
keep getting mimic installed. I'm not sure where mimic is even coming
from... Here's what I'm doing (basically following the docs):
* Fresh install of Centos 7.7
sudo yum install -y
https://dl.fedoraproject.org/pub/epel/e
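(One thing worth checking in this situation is which release the Ceph repo and ceph-deploy are actually pointing at; host names below are placeholders:)

grep baseurl /etc/yum.repos.d/ceph.repo              # should mention rpm-nautilus, not rpm-mimic
yum list ceph --showduplicates                       # see which versions each repo offers
ceph-deploy install --release nautilus node1 node2   # ask for nautilus explicitly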
Hello,
I'm going down the long and winding road of upgrading our ceph
clusters from mimic to the latest version. This has involved slowly
going up one release at a time. I'm now going from octopus to pacific,
which also involves upgrading the OS on the host systems from Centos 7
to Rocky 9.
I fir
Actually, stupid mistake on my part. I had selinux mode as enforcing.
Changed it to disabled, and everything works again. Thanks for the
help!
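(Rather than disabling SELinux outright, the specific denial can usually be identified and either fixed or allowed with a small local module, roughly as follows; the module name is arbitrary:)

getenforce                                          # confirm the current mode
setenforce 0                                        # temporarily permissive, for testing only
ausearch -m avc -ts recent                          # show what was being denied
ausearch -m avc --raw | audit2allow -M ceph-local   # generate a local policy module
semodule -i ceph-local.pp                           # install it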
Álvaro Soto wrote:
>
> But why do you need to disable selinux for the service to work? You shouldn't
> have an issue.
>
>
> On Fri, Jan 10, 2025, 6:20 PM Jorge Garcia wrote:
>>
>> Actually, stupid mistake on my part. I had selinux mode as enforcing.
>> Changed