On 5/30/24 08:53, tpDev Tester wrote:
Can someone please point me to the docs on how I can expand the capacity of
the pool without such problems?
Please show the output of
ceph status
ceph df
ceph osd df tree
ceph osd crush rule dump
ceph osd pool ls detail
Regards
--
Robert Sander
Heinlei
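(For the archive: a minimal sketch of the usual way to grow capacity, assuming a cephadm-managed cluster, a hypothetical host "node4" and a hypothetical replicated pool "mypool"; the right devices and pg_num depend on the outputs requested above.)
# add new OSDs; CRUSH rebalances existing data onto them
ceph orch daemon add osd node4:/dev/sdb
# check what the autoscaler recommends for pg_num once the new OSDs are in
ceph osd pool autoscale-status
# or raise pg_num manually
ceph osd pool set mypool pg_num 256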
Hi,
I have been curious about Ceph for a long time, and now I have started to
experiment to find out how it works. The idea I like most is that Ceph
can provide growing storage without the need to move from storage x to
storage y on the consumer side.
I started with a 3 node cluster where each node g
On 5/27/24 09:28, s.dhivagar@gmail.com wrote:
We are using a Ceph Octopus environment. For the client, can we use Ceph Quincy?
Yes.
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin
https://www.heinlein-support.de
Tel: 030 / 405051-43
Fax: 030 / 405051-19
Amtsgericht
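(Not stated in the thread, but a quick sanity check on the cluster side would be something like:)
# releases of all running daemons
ceph versions
# feature bits / release names reported by currently connected clients
ceph features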
Hi,
I'm still unable to get our filesystem back.
I now have this:
fs_cluster - 0 clients
==========
RANK  STATE   MDS       ACTIVITY   DNS     INOS    DIRS    CAPS
 0    rejoin  cephmd4b              90.0k   89.4k   14.7k      0
 1    rejoin  cephmd6b               105k    105k   21.3k      0
 2    failed
Hi Team,
We are using Ceph Octopus with a total disk capacity of 136 TB, configured
with two replicas. Currently, our usage is 57 TB, and the available size is 5.3
TB. An incident occurred yesterday where around 3 TB of data was deleted
automatically. Upon analysis, we couldn't find the re
Please give some suggestion on this issue.
Hi,
The last release for EL7 is Octopus (version 15); you are trying to get version 18.
k
Sent from my iPhone
> On 29 May 2024, at 22:34, abdel.doui...@gmail.com wrote:
>
> The Ceph repository at https://download.ceph.com/ does not seem to have the
> librados2 package version 18.2.0 for RHEL 7. Th
Hi everyone,
I want to ask about placement groups in the scrubbing state: my cluster has
too many PGs in the scrubbing state, and the number is increasing over time;
maybe scrubbing is taking too long.
The cluster itself does not report any problem; I'm using the Reef release.
root@n1s1:~# ceph health detail
HEALTH_OK
I want to
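(A rough sketch of how to look into this, assuming default scrub settings; the hour values below are just examples:)
# count PGs whose state includes scrubbing (covers deep scrubs too)
ceph pg dump pgs_brief | grep -c scrubbing
# how many scrubs a single OSD may run in parallel
ceph config get osd osd_max_scrubs
# optionally confine scrubbing to off-peak hours
ceph config set osd osd_scrub_begin_hour 22
ceph config set osd osd_scrub_end_hour 6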
Hello,
Please help: Ceph cluster using Docker.
I am using the below command for the bootstrap with the provided key and pub:
cephadm -v bootstrap --mon-ip --allow-overwrite --ssh-private-key
id_ed25519 --ssh-public-key id_ed25519.pub
I am able to ssh directly with the id_ed25519 key.
RuntimeError: Failed command
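(One guess: --mon-ip takes an IP address, and none is given in the pasted command. Assuming that is not just a paste artifact, the expected shape would be roughly the following; the IP and key paths are placeholders:)
cephadm -v bootstrap \
  --mon-ip 192.0.2.10 \
  --ssh-private-key /root/.ssh/id_ed25519 \
  --ssh-public-key /root/.ssh/id_ed25519.pub \
  --allow-overwrite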
> You also have the metadata pools used by RGW that ideally need to be on NVME.
The OP seems to intend shared NVMe for WAL+DB, so that the omaps are on NVMe
that way.
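(For illustration only, a drive-group spec of the kind presumably meant here, with HDDs as data devices and shared NVMe for WAL+DB; the service_id and host pattern are made up:)
cat > osd-hdd-nvme-db.yaml <<'EOF'
service_type: osd
service_id: hdd_with_nvme_db
placement:
  host_pattern: '*'
spec:
  data_devices:
    rotational: 1
  db_devices:
    rotational: 0
EOF
ceph orch apply -i osd-hdd-nvme-db.yaml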
> How do you know if it's safe to set `require-min-compat-client=reef` if you have kernel clients?
At the moment it is not enforced when it comes to accepting connections
from clients. There is an ongoing discussion about what was intended and
what the contract in librados finally became.
Regardless of
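(A hedged sketch of what can be checked before flipping the flag; kernel clients often advertise an older release name here, which is part of what the discussion above is about:)
# release names / feature bits reported by connected clients
ceph features
# refuses if connected clients look too old, unless forced
ceph osd set-require-min-compat-client reef
ceph osd set-require-min-compat-client reef --yes-i-really-mean-it   # override at your own risk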
I've had rbd-mirror deployed for over a year, copying images from Site A to Site
B for DR.
After introducing an additional node into the cluster a few weeks back and
creating OSDs on this node (no mon/mgr), we've started to see snapshots
throwing an error 'librbd::mirror::snapshot::CreatePrimary
With RBD Mirroring configured from Site A to Site B, we typically see 2
snapshots on the source side (current and 1 previous) and 1 snapshot on the
destination side (most recent snapshot from source) - this has been the case
for over a year now.
When testing a DR scenario (i.e. demoting the pri
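(Not from the thread, but the commands I would use to compare snapshot state on both sides; pool and image names are hypothetical:)
# overall mirroring health for the pool
rbd mirror pool status --verbose mypool
# per-image mirroring state, including the last synced snapshot
rbd mirror image status mypool/myimage
# list all snapshots, including mirror snapshots hidden from the default listing
rbd snap ls --all mypool/myimage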
The Ceph repository at https://download.ceph.com/ does not seem to have the
librados2 package version 18.2.0 for RHEL 7. The directory
https://download.ceph.com/rpm-18.2.0/el7/ is empty, and the specific
package URL
https://download.ceph.com/rpm-18.2.0/el7/x86_64/librados2-18.2.0-0.el7.x86_64.
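(One way to check which EL releases a given version was actually built for, assuming the directory index is plain HTML:)
curl -s https://download.ceph.com/rpm-18.2.0/ | grep -oE 'el[0-9]+' | sort -u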
We are using a Ceph Octopus environment. For the client, can we use Ceph Quincy?
So a few questions I have around this.
What is the network you have for this cluster?
Changing bluestore_min_alloc_size would be the last thing I would even
consider. In fact, I wouldn't change it, as you would be in untested territory.
The challenge with making these sorts of things perform i
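(For context, a hedged way to see what the option currently is; note it only applies to OSDs created after a change, existing OSDs keep the value they were built with:)
ceph config get osd bluestore_min_alloc_size_hdd
ceph config get osd bluestore_min_alloc_size_ssd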
Hi all,
I'm running a Quincy 17.2.5 Ceph cluster with 3 nodes that each have 3 disks,
with replica size 3 and min_size 2. My cluster was running fine before, and
suddenly it can't read or write because of Ceph slow ops on all OSDs. I tried
restarting the OSDs, then degraded and misplaced data appeared, and
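(Not from the original post; a sketch of where one could start digging, assuming access to the OSD hosts and a hypothetical osd.0:)
ceph health detail                    # which OSDs report slow ops
ceph daemon osd.0 dump_ops_in_flight  # on the OSD host: ops currently stuck
ceph daemon osd.0 dump_historic_ops   # recent ops with their event timelines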
Hi,
we have a stretched cluster (Reef 18.2.1) with 5 nodes (2 nodes on each side +
witness). You can see our daemon placement below.
[admin]
ceph-admin01 labels="['_admin', 'mon', 'mgr']"
[nodes]
[DC1]
ceph-node01 labels="['mon', 'mgr', 'mds', 'osd']"
ceph-node02 labels="['mon', 'rgw', 'mds', 'o
I am running two Ceph clusters on 18.2.2. On both clusters, I initially
configured rgw using the following config from a yaml file:
service_type: rgw
service_id: s3.jvm.gh79
placement:
  label: rgw
  count_per_host: 1
spec:
  rgw_realm: jvm
  rgw_zone: gh79
  ssl: true
  rgw_frontend_
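(If useful: the usual way to adjust such a spec later is to edit the file and re-apply it; the filename is hypothetical:)
ceph orch apply -i rgw-spec.yaml
ceph orch ls rgw --export   # show the spec the orchestrator currently holds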
Hi everyone.
I had the very same problem, and I believe I've figured out what is happening.
Many admins advise using "ufw limit ssh" to help protect your system against
brute-force password guessing. Well, "ceph orch host add" makes multiple ssh
connections very quickly and triggers the ufw l
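(A hedged workaround sketch: put a plain allow rule for the admin/cephadm host ahead of the limit rule so it matches first; the IP is an example:)
ufw insert 1 allow from 192.0.2.10 to any port 22 proto tcp
ufw status numbered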
Hi Stefan ... I did the next step and need your help.
My idea was to stretch the cluster without stretch mode, so we decided
to reserve a size of 4 on each side.
The setup is the same as stretch mode: the same crush rule, location,
election_strategy and tiebreaker.
Only "ceph mon enable_stretc
Hi,
I have a small cluster with 11 OSDs and 4 filesystems. Each server
(Debian 11, Ceph 17.2.7) usually runs several services.
After trouble with a host holding OSDs, I removed the OSDs and let the
cluster repair itself (x3 replica). After a while it returned to a
healthy state and everything
Hi folks,
I am currently in the process of building a Multisite Cluster (v18.2.2) with
two Ceph clusters (cluster1 and cluster2). Each Cluster is its own Zone and has
its own zonegroup. The idea is to sync metadata but not data. The user should
select the zonegroup (region) via the location con
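(A sketch of the layout as I understand it, using radosgw-admin; realm, zonegroup and endpoint names are made up and the system user credentials are elided. With one realm and two zonegroups, metadata syncs realm-wide while data stays inside each zonegroup:)
# on cluster1 (metadata master)
radosgw-admin realm create --rgw-realm=global --default
radosgw-admin zonegroup create --rgw-zonegroup=cluster1 --endpoints=http://rgw1:8080 --rgw-realm=global --master --default
radosgw-admin zone create --rgw-zonegroup=cluster1 --rgw-zone=zone1 --endpoints=http://rgw1:8080 --master --default
radosgw-admin period update --commit
# on cluster2: join the realm, but with its own zonegroup
radosgw-admin realm pull --url=http://rgw1:8080 --access-key=<key> --secret=<secret>
radosgw-admin zonegroup create --rgw-zonegroup=cluster2 --endpoints=http://rgw2:8080 --rgw-realm=global
radosgw-admin zone create --rgw-zonegroup=cluster2 --rgw-zone=zone2 --endpoints=http://rgw2:8080
radosgw-admin period update --commit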
Hello! We're waiting for the brand new minor release 18.2.3 because of
https://github.com/ceph/ceph/pull/56004. Why? Timing in our work is a tough
thing. Could you kindly share an estimate of the 18.2.3 release timeframe? It
has been 16 days since the original tag was created, so I want to understand
when it will be released.
>
> Each simultaneously. These are SATA3 with 128mb cache drives.
Turn off the cache.
> The bus is 6 gb/s. I expect usage to be in the 90+% range not the 50% range.
"usage" as measured how?
>
> On Mon, May 27, 2024 at 5:37 PM Anthony D'Atri wrote:
>
>>
>>
>>
>> hdd iops on the thre
Each simultaneously. These are SATA3 with 128mb cache drives. The bus is
6 gb/s. I expect usage to be in the 90+% range not the 50% range.
On Mon, May 27, 2024 at 5:37 PM Anthony D'Atri wrote:
>
>
>
> hdd iops on the three discs hover around 80 +/- 5.
>
>
> Each or total? I wouldn’t expect
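(For what it's worth, a sketch of how the drive cache can be disabled and how utilisation is commonly measured; the device name is an example:)
hdparm -W 0 /dev/sdX          # disable the drive's volatile write cache
smartctl -g wcache /dev/sdX   # verify the setting
iostat -x 1                   # %util and latency per device, 1-second samples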
How do you know if it's safe to set `require-min-compat-client=reef` if you have
kernel clients?
Thanks,
Kevin
From: Laura Flores
Sent: Wednesday, May 29, 2024 8:12 AM
To: ceph-users; dev; clt
Cc: Radoslaw Zarzynski; Yuri Weinstein
Subject: [ceph-users] B
Dear Ceph Users,
We have discovered a bug with the pg-upmap-primary interface (related to
the offline read balancer [1]) that affects all Reef releases.
In all Reef versions, users are required to set
`require-min-compat-client=reef` in order to use the pg-upmap-primary
interface to prevent pre-r
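(For anyone checking whether their cluster is affected, a hedged sketch; the pg id is an example:)
# list any primary mappings the read balancer has installed
ceph osd dump | grep pg_upmap_prim
# remove a single mapping if necessary
ceph osd rm-pg-upmap-primary 2.1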
On Wed, 29 May 2024, Eugen Block wrote:
> I'm not really sure either, what about this?
>
> ceph mds repaired
I think it works only for 'damaged' MDS ranks.
N.
> The docs state:
>
> >Mark the file system rank as repaired. Unlike the name suggests, this command
> >does not change a MDS; it manipula
I'm not really sure either, what about this?
ceph mds repaired
The docs state:
Mark the file system rank as repaired. Unlike the name suggests,
this command does not change a MDS; it manipulates the file system
rank which has been marked damaged.
Maybe that could bring it back up? Did yo
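(Concretely, for the failed rank 2 shown earlier in this thread, that would be something like the following; a sketch, not a recommendation:)
ceph mds repaired fs_cluster:2
ceph fs status fs_cluster   # watch whether the rank comes back via replay/rejoin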
Hi,
after our desaster yesterday, it seems that we got our MONs back.
One of the filesystems, however, seems in a strange state:
% ceph fs status
fs_cluster - 782 clients
==========
RANK  STATE   MDS       ACTIVITY   DNS     INOS    DIRS    CAPS
 0    active  cephmd6a  Reqs: