I noticed that in my scenario, when I mount CephFS via the kernel module,
data is written directly to only one or three of the OSDs, and the client's
write speed is higher than the speed of replication and auto-scaling. This
causes the write operation to stop as soon as those OSDs are full, and the
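A quick way to confirm that only a few OSDs are receiving data is to look at
per-OSD utilization and at where the pool's PGs actually map; a minimal
sketch (the pool name cephfs_data is just a placeholder):

ceph osd df tree                  # per-OSD utilization and weights
ceph df                           # per-pool usage
ceph pg ls-by-pool cephfs_data    # shows which OSDs each PG maps to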
Hi,
I have a problem with Ceph 17.2.6 (CephFS with MDS daemons) and am seeing
some unusual behavior.
I created a data pool with the default CRUSH rule, but data is stored only
on 3 specific OSDs while the other OSDs stay empty.
PG auto-scaling is also active, but the PG count does not change as the
pool gets bigger.
I did this manua
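To check whether the autoscaler is actually adjusting the PG count, comparing
the current and target values can help; a minimal sketch, again assuming the
data pool is named cephfs_data:

ceph osd pool autoscale-status
ceph osd pool get cephfs_data pg_num
# if needed, pg_num can also be raised manually:
ceph osd pool set cephfs_data pg_num 128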
Hi,
I don't think this is going to work. Each OSD belongs to a specific
host and you can't have multiple buckets (e.g. bucket type "host")
with the same name in the crush tree. But if I understand your
requirement correctly, there should be no need to do it this way. If
you structure your
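For illustration, structuring the tree usually means moving hosts under
distinct parent buckets rather than reusing bucket names; a hedged sketch
with made-up bucket and host names:

ceph osd crush add-bucket room1 room
ceph osd crush add-bucket room2 room
ceph osd crush move room1 root=default
ceph osd crush move room2 root=default
ceph osd crush move host-a room=room1
ceph osd crush move host-b room=room2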
Hi,
I don't quite understand the issue yet, maybe you can clarify.
If I perform a "change volume type" from OpenStack on volumes
attached to the VMs, the system successfully migrates the volume from
the source pool to the destination pool, and at the end of the
process the volume is visible
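For reference, a retype that migrates the volume between backends is normally
triggered like this (the names are placeholders; this is the generic
OpenStack client call, not necessarily the poster's exact setup):

openstack volume set --type <new-volume-type> --retype-policy on-demand <volume-id>
openstack volume show <volume-id>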
Hello,
This message does not concern Ceph itself but a hardware vulnerability which
can lead to permanent loss of data on a Ceph cluster equipped with the same
hardware in separate fault domains.
The DELL / Toshiba PX02SMF020, PX02SMF040, PX02SMF080 and PX02SMB160 SSD drives
of the 13G gene
Hello Weiwen,
Thank you for the response. I've attached the output for all PGs in state
incomplete and remapped+incomplete. Thank you!
Thanks,
Jayanth Reddy
On Sun, Jun 18, 2023 at 4:09 PM Jayanth Reddy
wrote:
> Hello Weiwen,
>
> Thank you for the response. I've attached the output for all PGs
Hello Weiwen,
Thank you for the response. I've attached the output for all PGs in state
incomplete and remapped+incomplete. Thank you!
Thanks,
Jayanth Reddy
On Sat, Jun 17, 2023 at 11:00 PM 胡 玮文 wrote:
> Hi Jayanth,
>
> Can you post the complete output of “ceph pg query”? So that we can
> und
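For anyone following along, the output being asked for is produced per PG; a
minimal sketch with 2.1a as a placeholder PG id:

ceph pg ls incomplete                     # list PGs in state incomplete
ceph pg 2.1a query > pg-2.1a-query.json   # full query output for one PG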
Hello guys,
We have a Ceph cluster that runs just fine with Ceph Octopus; we use RBD
for some workloads, RadosGW (via S3) for others, and iSCSI for some Windows
clients.
Recently, we had the need to add some VMWare clusters as clients for the
iSCSI GW and also Windows systems with the use of Clus
Hi, I have a real hardware cluster for testing available now. I'm not
sure whether I'm completely misunderstanding how it's supposed to work
or if it's a bug in the LRC plugin.
This cluster has 18 HDD nodes available across 3 rooms (or DCs), I
intend to use 15 nodes to be able to recover if o
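For context, an LRC profile spanning rooms is typically created along these
lines; the k/m/l values here are only illustrative, not the actual layout
being tested:

ceph osd erasure-code-profile set lrc_test \
    plugin=lrc k=4 m=2 l=3 \
    crush-failure-domain=host crush-locality=room
ceph osd pool create lrc_pool erasure lrc_test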
Hi,
so grafana is starting successfully now? What did you change?
Regarding the container images, yes there are defaults in cephadm
which can be overridden with ceph config. Can you share this output?
ceph config dump | grep container_image
I tend to always use a specific image as describe
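As an example of overriding one of those defaults, the Grafana image used by
cephadm can be pinned via a config key (the image tag below is only an
example):

ceph config set mgr mgr/cephadm/container_image_grafana quay.io/ceph/ceph-grafana:8.3.5
ceph orch redeploy grafana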
On 19.06.23 13:47, Work Ceph wrote:
Recently, we had the need to add some VMWare clusters as clients for the
iSCSI GW and also Windows systems with the use of Clustered Storage Volumes
(CSV), and we are facing a weird situation. In windows for instance, the
iSCSI block can be mounted, formatted
Windows Cluster Shared Volumes and Failover Clustering require
support for SCSI persistent reservations by the block device to
coordinate access by multiple hosts. The default iSCSI implementation in
Ceph does not support this; you can use the iSCSI implementation in the
PetaSAN project:
The documentation very briefly explains a few core commands for restarting
things;
https://docs.ceph.com/en/quincy/cephadm/operations/#starting-and-stopping-daemons
but I feel I'm missing quite a few details about what is safe to do.
I have a system in production, clusters connected via CephFS and som
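For what it's worth, the cephadm operations that page refers to look roughly
like this; the daemon and service names are placeholders:

ceph orch ps                                   # list daemons and their status
ceph orch daemon restart mds.cephfs.host1.abc  # restart a single daemon
ceph orch stop <service-name>                  # stop all daemons of a service
ceph orch start <service-name>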
Hi,
Actually I've learned that a rule doesn't need to start with a root
bucket, so I can have rules that only consider a subtree of my total
resources and achieve what I was trying to do with the different disjoint
hierarchies.
BTW: it is possible to have different trees with dif
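For illustration, a replicated rule that only considers a subtree can be
created directly from a non-default root; ssd-root is a made-up bucket name:

ceph osd crush add-bucket ssd-root root
ceph osd crush rule create-replicated ssd-only ssd-root host
ceph osd pool set <pool> crush_rule ssd-only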
On Sat, Jun 17, 2023 at 1:11 PM Jayanth Reddy
wrote:
>
> Hello Folks,
>
> I've been experimenting with RGW encryption and found this out.
> Focusing on Quincy and Reef dev: for SSE (any method) to work, the transit
> has to be end-to-end encrypted; however, if there is a proxy, then [1] can
> be m
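If the proxy terminates TLS, the option usually involved is
rgw_trust_forwarded_https; I'm assuming that's what [1] refers to, so treat
this as a sketch rather than a confirmation:

ceph config set client.rgw rgw_trust_forwarded_https true
# the proxy must then forward the original scheme, e.g. via X-Forwarded-Proto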
On Sat, Jun 17, 2023 at 8:37 AM Vahideh Alinouri
wrote:
>
> Dear Ceph Users,
>
> I am writing to request the backporting of changes related to the
> AsioFrontend class, specifically regarding the header_limit value.
>
> In the Pacific release of Ceph, the header_limit value in the
> AsioFrontend c
Hi Eugen,
Thank you very much for these detailed tests that match what I observed
and reported earlier. I'm happy to see that we have the same
understanding of how it should work (based on the documentation). Is
there any way other than this list to get in contact with the plugin
developers
Hi Patrick,
The event log size of 3 of the 5 MDS daemons is still very high. mds.1, mds.3,
and mds.4 report between 4 and 5 million events, mds.0 around 1.4
million and mds.2 between 0 and 200,000. The numbers have been constant
since my last MDS restart four days ago.
I ran your ceph-gather.sh script a
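In case it helps to correlate, the per-MDS event and segment counts can be
read from the perf counters on the host running each MDS; a sketch, the
daemon name is a placeholder:

ceph daemon mds.<name> perf dump mds_log      # ev/seg counters = events/segments
ceph config get mds mds_log_max_segments      # current trimming target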
Hi,
adding the dev mailing list, hopefully someone there can chime in. But
apparently the LRC code hasn't been maintained for a few years
(https://github.com/ceph/ceph/tree/main/src/erasure-code/lrc). Let's
see...
Quoting Michel Jouvin:
Hi Eugen,
Thank you very much for these detaile
I see, thanks for the feedback guys!
It is interesting that Ceph Manager does not allow us to export iSCSI
blocks without selecting 2 or more iSCSI portals. Therefore, we will always
use at least two, and as a consequence that feature is not going to be
supported. Can I export an RBD image via iSC
As a side note: there's the Windows RBD driver, which will get you way
more performance. It's labeled beta, but it seems to work fine for a lot
of people. If you have a test lab, you could try that.
Angelo.
On 19/06/2023 18:16, Work Ceph wrote:
I see, thanks for the feedback guys!
It is inter
Hello,
I'd like to know whether there is a way to query metrics/logs in Octopus (or
in a newer version; I'm interested for the future too) about the bandwidth
used in a bucket for PUT/GET operations?
Thank you
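The RGW usage log is one way to get per-bucket byte counters for GET/PUT; a
sketch, assuming the usage log is enabled (rgw_enable_usage_log = true) and
with user/bucket names as placeholders:

radosgw-admin usage show --uid=<user> --start-date=2023-06-01 --end-date=2023-06-19
radosgw-admin usage show --bucket=<bucket> --show-log-entries=false
# the output includes bytes_sent / bytes_received per operation category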