Hi Greg,
As a follow-up, we see items similar to this pop up in the objecter_requests
output (when it's not empty). I'm not sure I'm reading it right, but
some appear quite large (in the MB range?):
{
    "ops": [
        {
            "tid": 9532804,
            "pg": "3.f9c235d7",
            "osd": 2,
It's nice to hear I'm on the right track.
Thanks for the answers.
On Wed, Dec 8, 2021 at 12:13, Anthony D'Atri wrote:
>
> I’ve had good success with this strategy: have the mons chime off each other, and
> perhaps have the OSD / other nodes sync against the mons too.
> Chrony >> ntpd
> With modern i
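A minimal chrony sketch of that layout, with hypothetical hostnames mon1-mon3; the mons peer with each other and the other nodes sync against the mons:
```
# /etc/chrony.conf on a mon (shown for mon1; omit the peer entry for the local host)
server <upstream-ntp-server> iburst
peer mon2
peer mon3
allow <cluster-network>/24    # let the other ceph nodes use this mon as a time source

# /etc/chrony.conf on OSD and other nodes: chime against the mons
server mon1 iburst
server mon2 iburst
server mon3 iburst
```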
Hi,
I'd like to answer my own question :-) I finally found the rest of my documentation...
So after reinstalling the OS, the OSD config must also be recreated.
Here is what I did; maybe this helps someone:
--
Get the information:
```
cephadm ceph-volume lvm list
ceph config generate-minimal-conf
```
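With that information in hand, the existing LVM-backed OSDs can be brought back up on the reinstalled host; a sketch, assuming a cephadm-managed cluster and a hypothetical host name osd-host1:
```
# Re-add the reinstalled host to the orchestrator (hostname is a placeholder)
ceph orch host add osd-host1
# Ask cephadm to activate all existing OSDs it finds on that host
ceph cephadm osd activate osd-host1
```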
On Fri, Dec 10, 2021 at 01:12:56AM +0100, Roman Steinhart wrote:
> hi,
>
> recently I had to switch the other way around (from podman to docker).
> I just...
> - stopped all daemons on a host with "systemctl stop ceph-{uuid}@*"
> - purged podman
> - triggered a redeploy for every daemon with "ceph
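Roughly, that sequence looks like the sketch below; the fsid and daemon names are placeholders, and the package commands assume an apt-based host:
```
# Stop every ceph daemon on the host (units are named after the cluster fsid)
systemctl stop 'ceph-<fsid>@*'
# Swap the container runtime (assumes a Debian/Ubuntu host)
apt purge podman && apt install docker.io
# Have the orchestrator redeploy each daemon so it comes back under the new runtime
ceph orch daemon redeploy osd.12
ceph orch daemon redeploy mon.host1
```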
Hi,
We are experimenting with manually created CRUSH maps that pick one SSD
as primary and two HDD devices as replicas. Since all our HDDs have the DB & WAL on
NVMe drives, this gives us a nice combination of pretty good write
performance, and great read performance while keeping costs manageable for
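For reference, that kind of placement can also be expressed with device classes instead of fully hand-edited buckets; a minimal sketch of such a rule (name and id are arbitrary), taking the first replica from an ssd host and the rest from hdd hosts:
```
rule ssd_primary {
        id 5
        type replicated
        step take default class ssd
        step chooseleaf firstn 1 type host
        step emit
        step take default class hdd
        step chooseleaf firstn -1 type host
        step emit
}
```
The usual caveat is that the two take/emit passes choose hosts independently, so without extra hierarchy the host holding the SSD replica can also be picked for an HDD one.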
So I did an export of the PG using ceph-objectstore-tool in hopes that I
could push ceph to forget about the rest of the data there. It was a
successful export, but we'll see how the import goes. I tried importing on one
OSD already but got a message that the PG already exists; am I doing
something wrong?
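For anyone following along, the export/import cycle looks roughly like the sketch below; the data paths, OSD ids and pgid are placeholders, both OSDs must be stopped, and removing the existing copy on the target is destructive for that copy:
```
# On the source OSD (daemon stopped): export the PG to a file
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-2 \
    --pgid 3.f9 --op export --file /tmp/pg-3.f9.export

# On the target OSD (daemon stopped): drop its existing copy, then import
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-7 \
    --pgid 3.f9 --op remove --force
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-7 \
    --op import --file /tmp/pg-3.f9.export
```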
Robert, Roman and Weiwen Hu,
Thank you very much for your responses. I presume this is done one host at a
time, and that the redeploy will take care of any configuration, with nothing
further being necessary?
Thank you.
Marco
On Fri, Dec 10, 2021 at 7:36 AM 胡玮文 wrote:
> On Fri, Dec 10, 2021 at 01:12:56AM +0100, Roman Steinhart wrote:
Forgot to confirm: was this process non-destructive in terms of the data on the
OSDs?
Thanks again,
On Fri, Dec 10, 2021 at 9:23 AM Marco Pizzolo wrote:
> Robert, Roman and Weiwen Hu,
>
> Thank you very much for your responses. I presume one host at a time, and
> the redeploy will take care of any conf
Hello,
As part of a migration process where we will be swinging Ceph hosts from
one cluster to another, we need to reduce the pool size from 3 to 2 in order to
shrink the footprint sufficiently to allow safe removal of an OSD/Mon node.
The cluster has about 500M objects as per the dashboard, and is about 1
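For context, the setting in question is the per-pool replica count; a sketch of inspecting and changing it (the pool name is a placeholder), though note the reply further down advising against running at size 2:
```
# Check current replication settings for the pool (name is hypothetical)
ceph osd pool get mypool size
ceph osd pool get mypool min_size
# Reduce replicas from 3 to 2 (reduces failure tolerance; see the caution below)
ceph osd pool set mypool size 2
```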
Hi,
I also see this behaviour and can more or less reproduce it by running
rsync or Bareos backup tasks (anything stat-intense should do) on a
specific directory. Unmounting and then remounting the filesystem fixes
it, until it is caused again by a stat-intense task.
For me, I only saw two imme
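The unmount/remount workaround is just the normal kernel-client cycle; a sketch, with the mount point, monitor address and credentials all being placeholders:
```
# Cycle the CephFS kernel mount (paths/credentials are placeholders)
umount /mnt/cephfs
mount -t ceph <mon-addr>:/ /mnt/cephfs -o name=backup,secretfile=/etc/ceph/backup.secret
```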
Dear Ceph experts,
I encounter a use case wherein the size of a single file may go beyond 50 TB,
and would like to know whether CephFS can support a single file larger than
50 TB. Furthermore, if multiple clients, say 50, want to access (read/modify)
this big file, do we expect any performanc
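One concrete limit to check for this is the CephFS max_file_size setting, which defaults to 1 TiB and has to be raised before a 50 TB file can be created at all; a sketch, with the filesystem name cephfs as a placeholder:
```
# Show the current limit (default is 1099511627776 bytes = 1 TiB)
ceph fs get cephfs | grep max_file_size
# Raise it to 60 TiB to leave headroom above 50 TB
ceph fs set cephfs max_file_size 65970697666560
```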
Hi all,
Has the CfP deadline for Cephalocon 2022 been extended to 19 December 2022?
Please confirm if anyone knows...
Thanks
It appears to have been, and we have an application that's pending an
internal review before we can submit... so we're hopeful that it has
been!
On 2021-12-10 15:21, Bobby wrote:
Hi all,
Has the CfP deadline for Cephalocon 2022 been extended to 19 December
2022? Please confirm if anyone knows...
One typing mistake: I meant 19 December 2021.
On Fri, Dec 10, 2021 at 8:21 PM Bobby wrote:
>
> Hi all,
>
> Has the CfP deadline for Cephalocon 2022 been extended to 19 December
> 2022? Please confirm if anyone knows it...
>
>
> Thanks
>
I would avoid doing this. Size 2 is not where you want to be. Maybe you can
give more details about your cluster size and shape and what you are trying
to accomplish, and another solution could be proposed. The output of
"ceph osd tree" and "ceph df" would help.
Respectfully,
*Wes Dillingham*
w
On Sat, Dec 11, 2021 at 2:21 AM huxia...@horebdata.cn wrote:
>
> Dear Ceph experts,
>
> I encounter a use case wherein the size of a single file may go beyond 50 TB,
> and would like to know whether CephFS can support a single file with size
> over 50TB? Furthermore, if multiple clients, say 50,