Hello guys,
We are running Ceph Octopus on Ubuntu 18.04. We noticed that some OSDs are
using more than 16 GiB of RAM, even though the option "osd_memory_target" is
set to 4 GiB. The OSDs are SSDs, 2 TiB in size each.
Have you guys seen such behavior?
Are we missing some other configuration or p
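For anyone comparing notes, a quick way to check what the OSD actually thinks its memory target is and where the memory goes (the OSD id below is a placeholder) could be:
```
# Effective memory target for one OSD (osd.0 is a placeholder).
ceph config get osd.0 osd_memory_target

# Per-pool memory breakdown (caches, buffers, pglog, allocator overhead).
ceph daemon osd.0 dump_mempools

# Resident set size of the OSD processes on the host, for comparison.
ps -C ceph-osd -o pid,rss,cmd
```
As far as I understand, osd_memory_target is a best-effort target that only drives cache trimming, not a hard limit, so RSS can drift above it; four times the target is still worth investigating with dump_mempools.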
Hello guys,
We are running Ceph Octopus on Ubuntu 18.04, and we are noticing spikes of
IO utilization for bstore_kv_sync thread during processes such as adding a
new pool and increasing/reducing the number of PGs in a pool.
It is funny, though, that the I/O utilization (reported by iotop) is 99.99%
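A rough way to correlate that iotop figure with what the OSD itself measures (the OSD id and PID below are placeholders) might be:
```
# Per-thread disk I/O of one ceph-osd process; bstore_kv_sync shows up as
# its own thread here. Replace <pid> with the OSD's PID.
pidstat -dt -p <pid> 1

# RocksDB sync/flush/commit latencies as tracked by the OSD itself
# (counter names can vary slightly between releases).
ceph daemon osd.0 perf dump | grep -A 3 -E '"kv_(sync|flush|commit)_lat"'
```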
e seeing time spent waiting on fdatasync in
> bstore_kv_sync if the drives you are using don't have power loss
> protection and can't perform flushes quickly. Some consumer grade
> drives are actually slower at this than HDDs.
>
>
> Mark
>
>
> On 2/22/24 11:04, W
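To check whether the drives themselves are the bottleneck on flushes, a small fio sync-write test is one option (the file path, size and runtime are placeholders; run it against a filesystem backed by the same drive model, not against a production OSD):
```
# Every 4k write is issued with O_SYNC, which approximates the flush pattern
# of bstore_kv_sync writing the RocksDB WAL.
fio --name=flush-test --filename=/mnt/scratch/fio-test --size=1G \
    --rw=write --bs=4k --direct=1 --sync=1 --numjobs=1 --iodepth=1 \
    --runtime=60 --time_based
```
Drives with power-loss protection typically sustain thousands of these sync writes per second; consumer drives can drop to a few hundred or less, which matches the behavior described above.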
Hello guys!
We noticed an unexpected situation. In a recently deployed Ceph cluster we
are seeing raw usage that is a bit odd. We have a new cluster with 5 nodes
with the following setup:
- 128 GB of RAM
- 2 Intel(R) Xeon Silver 4210R CPUs
- 1 NVM
               32    0 B      0    0 B      0    115 TiB
rbd         6  32    0 B      0    0 B      0    115 TiB
```
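For this kind of RAW-vs-pool discrepancy, the commands that usually make the accounting visible (no specific pool names assumed) are:
```
# Raw vs per-pool usage, including replication and metadata overhead.
ceph df detail

# Per-OSD breakdown; the OMAP and META columns show how much of the raw
# usage is DB/WAL accounting rather than user data.
ceph osd df tree
```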
On Mon, Apr 3, 2023 at 10:25 PM Work Ceph
wrote:
> Hello guys!
>
>
> We noticed an unexpected situation. In a recently deployed Ceph cluster we
> are see
r OSD?
> >
> > If so then highly likely RAW usage is that high because DB volume space
> > is already counted as in-use.
> >
> > Could you please share "ceph osd df tree" output to prove that?
> >
> >
> > Thanks,
> >
> > Igor
his be a trick?
>
> If not - please share "ceph osd df tree" output?
>
>
> On 4/4/2023 2:18 PM, Work Ceph wrote:
>
> Thank you guys for your replies. The "used space" there is exactly that.
> It is the accounting for RocksDB and WAL.
> ```
>
>
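In case it helps anyone reading this later, one way to see how much of that used space is really BlueFS/RocksDB (the OSD id is a placeholder, and counter names may differ slightly between releases) is:
```
# BlueFS counters: db_total_bytes is the space reserved for RocksDB,
# db_used_bytes is what RocksDB actually occupies.
ceph daemon osd.0 perf dump | grep -E '"db_(total|used)_bytes"'
```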
Hello guys,
We have been reading the docs, and trying to reproduce that process in our
Ceph cluster. However, we always receive the following message:
```
librbd::Migration: prepare: image has watchers - not migrating
rbd: preparing migration failed: (16) Device or resource busy
```
We test
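The check that usually explains this error (pool and image names are placeholders) is the watcher list; `rbd migration prepare` refuses to run while any client still has the image open:
```
# Lists the clients (watchers) that currently have the image open.
rbd status mypool/myimage
```
The image has to be unmapped or detached from its VM everywhere before the prepare step will succeed.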
te, the clients can be restarted using the new
> > target image name. Attempting to restart the clients using the
> > source image name will result in failure.
>
> So I don't think you can live-migrate without interruption, at least
> not at the moment.
>
> Reg
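For completeness, the sequence being discussed here (image specs are placeholders); after the prepare step, clients have to be restarted against the target image name, which is where the interruption comes from:
```
# Prepare: creates the target image and links it to the source.
rbd migration prepare mypool/srcimage mypool/dstimage

# Execute: deep-copies the data in the background; clients use the target
# image name from this point on.
rbd migration execute mypool/dstimage

# Commit: removes the source image once the copy has finished.
rbd migration commit mypool/dstimage
```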
Hello guys!
Is it possible to restrict user access to a single image in an RBD pool? I
know that I can use namespaces, so users can only see images with a given
namespace. However, these users will still be able to create new RBD
images.
Is it possible to somehow block users from creating RBD im
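A hedged sketch of the namespace approach (pool, namespace and client names below are placeholders); as far as I know there is no clean cephx cap that pins a client to one specific image, but a namespace per tenant gets close, even though the client can still create images inside its own namespace:
```
# Create a namespace inside the pool and an image inside it.
rbd namespace create --pool rbd --namespace tenant-a
rbd create --size 100G rbd/tenant-a/disk01

# Caps restricted to that namespace: the client cannot see, create or touch
# images outside of it.
ceph auth get-or-create client.tenant-a \
    mon 'profile rbd' \
    osd 'profile rbd pool=rbd namespace=tenant-a'
```
For purely read-only access there is also the `profile rbd-read-only` OSD cap, which would block image creation entirely.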
Hello guys,
We have a question regarding snapshot management: when a protected snapshot is
created, should it be deleted when its RBD image is removed from the system?
If not, how can we list orphaned snapshots in a pool?
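On the second question, a crude way to enumerate every snapshot of every image in a pool (the pool name is a placeholder) is a small loop; protected snapshots are flagged in the `rbd snap ls` output:
```
# List all snapshots of all images in one pool.
POOL=rbd
for img in $(rbd ls "$POOL"); do
    echo "== $POOL/$img"
    rbd snap ls "$POOL/$img"
done
```
As far as I understand, rbd refuses to remove an image that still has snapshots, so a snapshot without a parent image should not normally exist; leftovers from clones are easier to find with `rbd children` on the protected snapshot.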
tect snap-spec
> Unprotect a snapshot from deletion (undo snap protect). If
> cloned children remain, snap unprotect fails. (Note that clones may
> exist in different pools than the parent snapshot.)
>
> Regards
>
> Reto
>
>
> On Wed., May 10, 2023 at 20
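To see why an unprotect fails, the clone check referenced in the quoted docs (image and snapshot specs are placeholders) is:
```
# List clones of a protected snapshot; clones may live in other pools.
rbd children rbd/myimage@mysnap

# Only once no clones remain (or after they have been flattened with
# `rbd flatten`) can the snapshot be unprotected and removed.
rbd snap unprotect rbd/myimage@mysnap
rbd snap rm rbd/myimage@mysnap
```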
Hello guys,
What would happen if we set up an RBD mirroring configuration, and in the
target system (the system where the RBD image is mirrored) we create
snapshots of this image? Would that cause some problems?
Also, what happens if we delete the source RBD image? Would that trigger a
deletion in
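For both questions, the state worth checking on each side before experimenting (pool and image names are placeholders) is the mirroring status:
```
# Per-image mirroring state, direction (primary/non-primary) and last sync.
rbd mirror image status rbd/myimage

# Pool-level summary: health and how many images are replaying or syncing.
rbd mirror pool status rbd --verbose
```
If I remember correctly, the mirrored copy on the target side is non-primary and therefore read-only, so creating regular snapshots there is rejected unless the image is promoted first; the commands above make it easy to confirm which side is primary.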
Hello guys,
We have a Ceph cluster that runs just fine with Ceph Octopus; we use RBD
for some workloads, RadosGW (via S3) for others, and iSCSI for some Windows
clients.
Recently, we needed to add some VMware clusters as clients for the
iSCSI GW and also Windows systems with the use of Clus
performance
> implementation. We currently use Ceph 17.2.5
>
>
> On 19/06/2023 14:47, Work Ceph wrote:
> > Hello guys,
> >
> > We have a Ceph cluster that runs just fine with Ceph Octopus; we use RBD
> > for some workloads, RadosGW (via S3) for others, and iSCSI for
Hello guys,
We have a Ceph cluster that runs just fine with Ceph Octopus; we use RBD
for some workloads, RadosGW (via S3) for others, and iSCSI for some Windows
clients.
We started noticing some unexpected performance issues with iSCSI. I mean,
an SSD pool is reaching 100 MB/s of write speed for an
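One way to tell whether the slowness sits in the Ceph pool or in the iSCSI layer on top of it is to benchmark the pool directly from a gateway node (pool and image names, sizes and runtimes below are placeholders):
```
# Raw RADOS write throughput to the pool, bypassing iSCSI entirely.
rados bench -p ssd-pool 60 write --no-cleanup
rados -p ssd-pool cleanup

# Or benchmark a scratch RBD image directly through librbd.
rbd create --size 20G ssd-pool/bench-img
rbd bench --io-type write --io-size 4M --io-total 10G ssd-pool/bench-img
rbd rm ssd-pool/bench-img
```
If these numbers look healthy, the bottleneck is more likely in the gateway or initiator configuration than in the pool itself.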
can make a distinct difference.
>
>
>
> On Jun 23, 2023, at 09:33, Work Ceph
> wrote:
>
> Great question!
>
> Yes, some of the slowness was detected in a Veeam setup. Have you
> experienced that before?
>
> On Fri, Jun 23, 2023 at 10:32 AM Anthony D'Atri
&g
Thanks for the help so far guys!
Has anybody used (made it work) the default ceph-iscsi implementation with
VMware and/or Windows CSV storage system with a single target/portal in
iSCSI?
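In case it helps to compare setups, the quickest way to show how many gateways/portals a target actually exposes is below; the interactive `create` step is from memory of the upstream ceph-iscsi docs, and the IQN, hostname and IP are placeholders:
```
# Dump the whole ceph-iscsi configuration: targets, gateways/portals, disks
# and the initiators mapped to them.
gwcli ls

# Adding a second gateway (portal) to an existing target, interactively:
#   gwcli
#   /> cd /iscsi-targets/iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw/gateways
#   /iscsi-target...gateways> create ceph-gw-2 192.168.10.12
```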
On Wed, Jun 21, 2023 at 6:02 AM Maged Mokhtar wrote:
>
> On 20/06/2023 01:16, Work Ceph wrote:
>
Thanks to all of you who tried to help here. We discovered the issue, and it
had nothing to do with Ceph or iSCSI GW.
The issue was being caused by a switch that was acting as the "router" for
the network of the iSCSI GW. All end clients (applications) were separated
into different VLANs, and netwo
, 2023 at 12:31 PM Work Ceph
wrote:
> Thanks for the help so far guys!
>
> Has anybody used (made it work) the default ceph-iscsi implementation with
> VMware and/or Windows CSV storage system with a single target/portal in
> iSCSI?
>
> On Wed, Jun 21, 2023 at 6:02 AM M
Hello guys,
We are seeing an unexpected mark in one of our pools. Do you guys
know what "removed_snaps_queue" means? We see some notation such as
"d5~3" after this tag. What does it mean? We tried to look into the docs,
but could not find anything meaningful.
We are running Ceph Octo
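For reference, the place where this queue shows up (and slowly drains as snapshot trimming completes) is the pool detail listing; as far as I understand, entries such as "d5~3" are intervals of snapshot ids in hexadecimal `start~count` form, i.e. three removed snaps starting at id 0xd5:
```
# Per-pool flags, snapshot ids and the removed_snaps_queue intervals that
# are still waiting to be trimmed.
ceph osd pool ls detail

# Count PGs currently busy with snapshot trimming.
ceph pg dump pgs_brief 2>/dev/null | grep -c snaptrim
```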
lts
> I posted in this list [1].
>
> Regards,
> Eugen
>
> [1]
>
> https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/ZEMGKBLMEREBZB7SWOLDA6QZX3S7FLL3/#YAHVTTES6YU5IXZJ2UNXKURXSHM5HDEX
>
> Zitat von Work Ceph :
>
> > Hello guys,
> > We are facing
Thanks for the prompt reply.
Yes, it does. All of them are up, with the correct class that is used by
the crush algorithm.
On Thu, Feb 13, 2025 at 7:47 AM Marc wrote:
> > Hello guys,
> > Let's say I have a cluster with 4 nodes with 24 SSDs each, and a single
> > pool that consumes all OSDs of a
Hello guys,
Let's say I have a cluster with 4 nodes with 24 SSDs each, and a single
pool that consumes all OSDs of all nodes. After adding another host, I
noticed that no extra space was added. Can this be a result of the number
of PGs I am using?
I mean, when adding more hosts/OSDs, should I alwa
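The usual checks for this situation (the pool name is a placeholder) would be:
```
# Per-OSD utilization and PG counts; new OSDs carrying few or zero PGs mean
# the pool's pg_num is too small for data to spread onto them.
ceph osd df tree

# Autoscaler view of the current vs suggested pg_num per pool.
ceph osd pool autoscale-status

# MAX AVAIL per pool is derived from the fullest OSDs the pool maps to, so a
# skewed distribution can cap it even though the raw capacity grew.
ceph df
```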
Thanks for the feedback!
Yes, HEALTH_OK is there.
The OSD status shows all of them as "exists,up".
The interesting part is that "ceph df" shows the correct values in the "RAW
STORAGE" section. However, for the SSD pool I have, it shows only the
previous value as the max usable value.
I had 38
Yes, the bucket that represents the new host is under the ROOT bucket, like
the others. Also, the OSDs are in the right/expected bucket.
I am guessing that the problem is the number of PGs. I have 120 OSDs across
all hosts, and I guess that 512 PGs, which is what the pool is using, is
not enough. I d
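A rough back-of-the-envelope check under the common ~100 PGs-per-OSD guideline (a replica count of 3 is assumed here, and the pool name is a placeholder) supports that:
```
# target_pgs ≈ (OSDs × 100) / replica_size, rounded to a power of two:
#   (120 × 100) / 3 = 4000  →  4096
# With only 512 PGs, each OSD holds about 512 × 3 / 120 ≈ 13 PGs, which
# matches the "PGS column averages around 12 or 13" remark quoted further
# down this thread.
ceph osd pool set <pool> pg_num 4096
```
As far as I recall, on Nautilus and later pgp_num follows pg_num automatically, so data starts rebalancing onto the new host once the PG split begins.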
Yes, everything has finished converging already.
On Thu, Feb 13, 2025 at 12:33 PM Janne Johansson
wrote:
> Den tors 13 feb. 2025 kl 12:54 skrev Work Ceph
> :
> > Thanks for the feedback!
> > Yes, HEALTH_OK is there.
> > The OSD status shows all of them as "exists
s is the
> only substantial pool on the cluster. When you do `ceph osd df`, if this
> is the only substantial pool, the PGS column at right would average around
> 12 or 13, which is super low.
>
> On Feb 13, 2025, at 11:40 AM, Work Ceph
> wrote:
>
> Yes, the bucket that