Hi all,
(I asked this on the Proxmox forums, but I think it may be more
appropriate here.)
In your practical experience, when choosing new hardware for a
cluster, is there any noticeable difference between using SATA or SAS
drives? I know SAS drives can have a 12Gb/s interface and I think SATA
cannot. The question really is how important it is for ceph to have an
intelligent drive interface. From my limited understanding of this,
it seems that the whole design of ceph means this doesn't really
matter that much, unlike in a traditional RAID environment.
> On Sat, 21 Aug 2021, 19:47 Roland
> > Sata drives spin at 7200rpm while SAS ones go
> > from 10k to 15k rpm, which increases the number of iops.
> >
> > Sata: 80 iops
> > Sas 10k: 120 iops
> > Sas 15k: 180 iops
> >
> > MTBF of SAS drives is also higher than SATA ones.
> >
> > What is your use case? RGW? Small o
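Those per-drive figures are easy to sanity-check with a short fio run
against the raw device (a minimal sketch; /dev/sdX is a placeholder, and
a raw randwrite run destroys data on the device):

# fio --name=iops-check --filename=/dev/sdX --direct=1 --sync=1 \
      --rw=randwrite --bs=4k --iodepth=1 --numjobs=1 \
      --runtime=60 --time_based

At queue depth 1 with --sync=1 this approximates the worst-case
WAL/journal write pattern, which is where rotational speed matters most.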
I have a 7 node cluster which is complaining that:
root@s1:~# ceph -s
  cluster:
    id:     a6092407-216f-41ff-bccb-9bed78587ac3
    health: HEALTH_WARN
            1 nearfull osd(s)
            4 pool(s) nearfull

  services:
    mon: 3 daemons, quorum sm1,2,s5
    mgr: s1(active), standbys: s5,
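With a warning like this, the usual first step is to see which OSD is
nearfull and how utilization is spread (a sketch):

# ceph health detail
# ceph osd df

ceph health detail names the nearfull OSD(s). Rebalancing or adding
capacity is the real fix; the threshold itself can be moved with
ceph osd set-nearfull-ratio, but that only buys time.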
I need some help with this please. The command below gives an error
which is not helpful to me.
ceph-volume lvm migrate --osd-id 14 --osd-fsid
4de2a617-4452-420d-a99b-9e0cd6b2a99b --from db wal --target
NodeC-nvme1/NodeC-nvme-LV-RocksDB1
--> Source device list is empty
Unable to migrate to
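That message usually means ceph-volume sees no standalone DB/WAL device
attached to the OSD. A quick check on the OSD's host (a sketch):

# ceph-volume lvm list
# ls -la /var/lib/ceph/osd/ceph-14/block*

If no block.db/block.wal symlinks are present, there is nothing for
--from db wal to pick up.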
On 2023/08/02 13:29, Roland Giesler wrote:
On 2023/08/02 12:53, Igor Fedotov wrote:
Roland,
First of all, there are no block.db/block.wal symlinks in the OSD
folder, which means there is no standalone DB/WAL any more.
That is surprising. So ceph-volume is not able to extract the DB/WAL
from
Ouch, I got excited too quickly!
On 2023/08/02 21:27, Roland Giesler wrote:
# systemctl start ceph-osd@14
And, voilà!, it did it.
# ls -la /var/lib/ceph/osd/ceph-14/block*
lrwxrwxrwx 1 ceph ceph 50 Dec 25 2022 /var/lib/ceph/osd/ceph-14/block -> /dev/mapper/0GVWr9-dQ65-LHcx-y6fD-z7fI-1
We have a FreeBSD 12.3 guest machine that works well on an RBD volume
until it is live migrated to another node (on Proxmox). After
migration, the processes almost all go into D state (waiting for the
disk) and they don't exit from it (i.e. they don't "get" the disk they
requested).
I'm not su
I have ceph 17.2.6 on a proxmox cluster and want to replace some SSDs
that are end of life. I have some spinners that have their journals on
SSD. Each spinner has a 50GB SSD LVM partition and I want to move each
of those to a new corresponding partition.
The new 4TB SSDs I have split into volume
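The usual ceph-volume route for such a move looks something like this
(a sketch; the VG/LV names are hypothetical, and the OSD should be
stopped first):

# systemctl stop ceph-osd@14
# ceph-volume lvm migrate --osd-id 14 --osd-fsid <osd-fsid> \
      --from db wal --target newvg/new-db-lv
# systemctl start ceph-osd@14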
I created a new osd class and changed the class of an osd to the new one
without taking the osd out and stopping it first. The new class also
has a crush rule and a pool created for it.
When I realised my mistake, I reverted to what I had before. However, I
suspect that I now have a mess on t
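For reference, the class change and its revert are usually done with
(a sketch; the osd id and class name are placeholders):

# ceph osd crush rm-device-class osd.20
# ceph osd crush set-device-class ssd osd.20

Each change rewrites the CRUSH map, so some PG remapping afterwards is
expected either way.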
On 2024/11/12 04:54, Alwin Antreich wrote:
Hi Roland,
On Mon, Nov 11, 2024, 20:16 Roland Giesler wrote:
I have ceph 17.2.6 on a proxmox cluster and want to replace some SSDs
that are end of life. I have some spinners that have their journals on
SSD. Each spinner has a 50GB SSD LVM part
On 2024/11/17 15:20, Gregory Orange wrote:
On 17/11/24 19:44, Roland Giesler wrote:
I cannot see any option that allows me to disable mclock...
It's not so much disabling mclock as changing the op queue scheduler to
use wpq instead of it.
https://docs.ceph.com/en/reef/rados/configuratio
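In practice the switch looks like this (a sketch; each OSD needs a
restart before the new scheduler takes effect):

# ceph config set osd osd_op_queue wpq

then restart the OSDs one at a time, e.g. systemctl restart ceph-osd@<id>.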
How do I determine the primary osd?
On 2024/11/14 16:12, Anthony D'Atri wrote:
You might also first try
ceph osd down 1701
This marks the OSD down in the map; it doesn't restart anything, but it does
serve in some cases to goose progress. The OSD will quickly mark itself back up.
"num_write": 67,
"num_write_kb": 19088,
"num_scrub_errors": 0,
"num_shallow_scrub_errors": 0,
"num_deep_scrub_errors": 0,
On 2024/11/15 13:00, Gregory Orange wrote:
On 15/11/24 17:11, Roland Giesler wrote:
How do I determine the primary osd?
ceph pg map $pg
ceph pg $pg query | jq .info.stats.acting_primary
You can jq and less to take a look at other values which might be
informative too.
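A concrete invocation might look like this (pg id hypothetical):

# ceph pg map 2.1a
# ceph pg 2.1a query | jq . | less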
Ah, of course
that just stay like that. If I try
to out this osd to stop it, the cluster also never settles. So then
if I try to stop it in the GUI it tells me 73 pg's are still on the OSD...
Can I force those pg's away from the osd?
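One way to force PGs off an OSD is to zero its CRUSH weight and let
backfill drain it (a sketch; the osd id is a placeholder):

# ceph osd crush reweight osd.14 0

Unlike marking it out, this removes the OSD's weight from CRUSH
entirely, so nothing should map back to it once backfill completes.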
On Nov 13, 2024, at 12:48 PM, Roland Giesler wrote:
I cre
    "weight": 121916,
    "pos": 8
},
{
    "id": 20,
    "weight": 121916,
    "pos": 9
},
{
    "id": 42,
    "weight": 228923,
    "pos"
I had attached images, but these are not shown...
On 2024/11/14 10:12, Roland Giesler wrote:
On 2024/11/14 09:37, Eugen Block wrote:
Remapped PGs is exactly what to expect after removing (or adding) a
device class. Did you revert the change entirely? It sounds like you
maybe forgot to add the
Roland Giesler wrote on Thu, 14 Nov 2024 at 05:40:
On 2024/11/13 21:05, Anthony D'Atri wrote:
I would think that there was some initial data mo
took the osd out of the cluster. If I try to stop the osd now, however,
I see this:
Clearly these are more pg's than the 6 that are still backfilling.
Is there a way to force the pg's off this osd, so I can safely stop it?
Quoting Roland Giesler:
On 2024/11/13 21:05, Anthony D
_iops_ssd value
set.
Restarting the primary and/or adjusting osd_mclock_max_capacity_iops_ssd
value(s) could help in this situation.
Regards,
Frédéric.
[1] https://docs.ceph.com/en/latest/rados/configuration/mclock-config-ref/
- On 14 Nov 24, at 12:19, Roland Giesler rol...@giesler.za.net wrote:
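In practice that adjustment looks something like this (a sketch; the
osd id and IOPS value are placeholders):

# ceph config show osd.14 osd_mclock_max_capacity_iops_ssd
# ceph config set osd.14 osd_mclock_max_capacity_iops_ssd 15000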