[ceph-users] Re: The confusing output of ceph df command

2020-09-11 Thread norman
Igor, I think I misunderstood the output of USED. The value should be the allocated size, which is sometimes not equal to 1.5*STORED. For example: when writing a 4k file, it may allocate 64k, which seems to use more space, but if you write another 4k, it can use the same blob (I will validate the guess). So c
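For anyone wanting to check this, a minimal sketch (the OSD id is a placeholder; note the allocation unit is baked in at mkfs time, so the running config value only matches if it has not been changed since the OSD was created):

    # allocation unit this OSD is configured with (hdd variant shown; an ssd variant also exists)
    ceph daemon osd.0 config get bluestore_min_alloc_size_hdd
    # per-pool STORED vs USED; USED reflects allocated space, not logical bytes
    ceph df detail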

[ceph-users] Re: Issues with the ceph-bluestore-tool during cluster upgrade from Mimic to Nautilus

2020-09-11 Thread Jean-Philippe Méthot
Here’s the out file, as requested. Jean-Philippe Méthot Senior Openstack system administrator Administrateur système Openstack sénior PlanetHoster inc. 4414-4416 Louis B Mayer Laval, QC, H7P 0G1, Canada TEL : +1.514.802.1644 - Poste : 2644 FAX : +1.514.612.0678 CA/US : 1.855.774.4678 FR : 01 76

[ceph-users] Re: Is it possible to assign osd id numbers?

2020-09-11 Thread Anthony D'Atri
Now that’s a *very* different question from numbers assigned during an install. With recent releases, instead of going down the full removal litany listed below, you can down/out the OSD and `destroy` it. That preserves the CRUSH bucket and OSD ID, then when you use ceph-disk, ceph-volu
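A minimal sketch of that flow, assuming osd.12 on /dev/sdX (both placeholders):

    ceph osd out 12
    systemctl stop ceph-osd@12                    # on the OSD host
    ceph osd destroy 12 --yes-i-really-mean-it    # keeps the ID and CRUSH entry
    # recreate on the replacement disk, reusing the same ID
    ceph-volume lvm create --osd-id 12 --data /dev/sdX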

[ceph-users] Re: OSDs and tmpfs

2020-09-11 Thread Oliver Freyermuth
Hi together, I believe the deciding factor is whether the OSD was deployed using ceph-disk (in "ceph-volume" speak, a "simple" OSD), which means the metadata will be on a separate partition, or whether it was deployed with "ceph-volume lvm". The latter stores the metadata in LVM tags, so the e
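One way to check which style a given OSD host uses (a sketch; paths are examples and exact output differs per release):

    ceph-volume lvm list                          # shows only LVM-deployed OSDs (metadata in LV tags)
    ceph-volume simple scan /var/lib/ceph/osd/ceph-3   # inspects a ceph-disk style OSD
    mount | grep /var/lib/ceph/osd                # xfs data partitions -> ceph-disk, tmpfs -> ceph-volume lvm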

[ceph-users] Re: Is it possible to assign osd id numbers?

2020-09-11 Thread Shain Miley
Thank you for your answer below. I'm not looking to reuse them as much as I am trying to control which unused number is actually used. For example, if I have 20 OSDs and 2 have failed... when I replace a disk in one server, I don't want it to automatically use the next lowest number for the osd as

[ceph-users] Re: OSDs and tmpfs

2020-09-11 Thread Marc Roos
I also have these mounts with bluestore:
/dev/sde1 on /var/lib/ceph/osd/ceph-32 type xfs (rw,relatime,attr2,inode64,noquota)
/dev/sdb1 on /var/lib/ceph/osd/ceph-3 type xfs (rw,relatime,attr2,inode64,noquota)
/dev/sdc1 on /var/lib/ceph/osd/ceph-6 type xfs (rw,relatime,attr2,inode64,noquota)
/d

[ceph-users] Re: OSDs and tmpfs

2020-09-11 Thread Dimitri Savineau
> We have a 23 node cluster and normally when we add OSDs they end up mounting like this:
>
> /dev/sde1 3.7T 2.0T 1.8T 54% /var/lib/ceph/osd/ceph-15
>
> /dev/sdj1 3.7T 2.0T 1.7T 55% /var/lib/ceph/osd/ceph-20
>
> /dev/sdd1 3.7T 2.1T 1.6T 58% /var/li

[ceph-users] Issues with the ceph-bluestore-tool during cluster upgrade from Mimic to Nautilus

2020-09-11 Thread Jean-Philippe Méthot
Hi, We’re upgrading our cluster, OSD node by OSD node, to Nautilus from Mimic. Some release notes recommended running the following command to fix stats after an upgrade:

ceph-bluestore-tool repair --path /var/lib/ceph/osd/ceph-0

However, running that command gives us the following
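For reference, a sketch of how that repair is typically invoked per OSD (IDs and paths are examples; the OSD normally needs to be stopped while the tool runs):

    systemctl stop ceph-osd@0
    ceph-bluestore-tool repair --path /var/lib/ceph/osd/ceph-0
    systemctl start ceph-osd@0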

[ceph-users] Re: ceph rbox test on passive compressed pool

2020-09-11 Thread Marc Roos
It is an hdd pool, all bluestore, configured with ceph-disk. Upgrades seem not to have 'updated' bluefs; some osds report like this:

{
    "/dev/sdb2": {
        "osd_uuid": "xxx",
        "size": 4000681103360,
        "btime": "2019-01-08 13:45:59.488533",
        "description": "main"
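That looks like the device-label output; a sketch of how to dump it, and of a way to check what bluefs settings an OSD reports (device and OSD id are examples):

    ceph-bluestore-tool show-label --dev /dev/sdb2
    ceph osd metadata 32 | grep bluefs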

[ceph-users] Re: ceph rbox test on passive compressed pool

2020-09-11 Thread david
On 09/11 09:36, Marc Roos wrote:
> Hi David,
>
> Just to let you know, this hint is being set. What is the reason for ceph only doing half the objects? Can it be that there is some issue with my OSDs? Like some maybe have an old fs (still using ceph-disk, not ceph-volume)? Is this still to be

[ceph-users] Re: ceph-osd performance on ram disk

2020-09-11 Thread Mark Nelson
On 9/11/20 4:15 AM, George Shuklin wrote:
> On 10/09/2020 19:37, Mark Nelson wrote:
>> On 9/10/20 11:03 AM, George Shuklin wrote:
>>> ... Are there any knobs to tweak to see higher performance for ceph-osd? I'm pretty sure it's not any kind of leveling, GC or other 'iops-related' issues (brd has per

[ceph-users] Re: Issues with the ceph-bluestore-tool during cluster upgrade from Mimic to Nautilus

2020-09-11 Thread Igor Fedotov
Could you please run:

CEPH_ARGS="--log-file log --debug-asok 5" ceph-bluestore-tool repair --path <...> ; cat log | grep asok > out

and share the 'out' file. Thanks, Igor

On 9/11/2020 5:15 PM, Jean-Philippe Méthot wrote:
> Hi, We’re upgrading our cluster, OSD node by OSD node, to Nautilus from

[ceph-users] Re: Is it possible to assign osd id numbers?

2020-09-11 Thread George Shuklin
On 11/09/2020 16:11, Shain Miley wrote:
> Hello, I have been wondering for quite some time whether or not it is possible to influence the osd.id numbers that are assigned during an install. I have made an attempt to keep our OSDs in order over the last few years, but it is a losing battle witho

[ceph-users] Re: Problem unusable after deleting pool with billion objects

2020-09-11 Thread Igor Fedotov
Jan, please see inline.

On 9/11/2020 4:13 PM, Jan Pekař - Imatic wrote:
> Hi Igor, thank you, I also think that it is the problem you described. I recreated the OSDs now and also noticed strange warnings - HEALTH_WARN Degraded data redundancy: 106763/723 objects degraded (14766.667%) Maybe the

[ceph-users] Re: Problem unusable after deleting pool with billion objects

2020-09-11 Thread Jan Pekař - Imatic
Hi Igor, thank you, I also think that it is the problem you described. I recreated the OSDs now and also noticed strange warnings - HEALTH_WARN Degraded data redundancy: 106763/723 objects degraded (14766.667%). Maybe there are some "phantom", zero-sized objects (OMAPs?) that the cluster is recoveri

[ceph-users] Is it possible to assign osd id numbers?

2020-09-11 Thread Shain Miley
Hello, I have been wondering for quite some time whether or not it is possible to influence the osd.id numbers that are assigned during an install. I have made an attempt to keep our OSDs in order over the last few years, but it is a losing battle without having some control over the OSD assign

[ceph-users] Re: Problem unusable after deleting pool with billion objects

2020-09-11 Thread Igor Fedotov
Hi Jan, most likely this is a known issue with the slow and ineffective pool removal procedure in Ceph. I gave a presentation on the topic at yesterday's weekly performance meeting; presumably a recording will be available in a couple of days. An additional accompanying issue not covered duri
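If the deletion work itself is what starves client I/O, one knob that may help pace it (present in recent Nautilus releases; the value is only an example) is osd_delete_sleep:

    # inject a sleep between PG/object deletion work items on every OSD; tune cautiously
    ceph config set osd osd_delete_sleep 1
    # media-specific variants (osd_delete_sleep_hdd / osd_delete_sleep_ssd) exist in newer releases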

[ceph-users] Problem unusable after deleting pool with billion objects

2020-09-11 Thread Jan Pekař - Imatic
Hi all, I have built a testing cluster with 4 hosts, 1 SSD and 11 HDDs on each host, running ceph version 14.2.10 (b340acf629a010a74d90da5782a2c5fe0b54ac20) nautilus (stable) on Ubuntu. Because we want to store small objects, I set bluestore_min_alloc_size to 8192 (it is maybe important in thi
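For context, a sketch of how that option is typically set; it only takes effect when an OSD is created (mkfs time), so existing OSDs keep whatever value they were built with:

    # centralized config (Nautilus+); per-media variants bluestore_min_alloc_size_hdd/_ssd also exist
    ceph config set osd bluestore_min_alloc_size 8192
    # or in ceph.conf under [osd]: bluestore_min_alloc_size = 8192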

[ceph-users] Re: cephadm - How to deploy ceph cluster with a partition on SSD for block.db

2020-09-11 Thread Jan Fajerski
On Tue, Sep 08, 2020 at 07:14:16AM -, kle...@psi-net.si wrote:
> I found out that it's already possible to specify a storage path in the OSD service specification YAML. It works for data_devices, but unfortunately not for db_devices and wal_devices, at least not in my case.
Aside from the questio
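For context, a sketch of the kind of spec being discussed (hostnames and devices are placeholders; as noted above, whether 'paths' is honoured under db_devices/wal_devices seems to depend on the cephadm/ceph-volume version):

    # osd_spec.yml
    service_type: osd
    service_id: osds_with_separate_db
    placement:
      host_pattern: 'osd-host-*'
    data_devices:
      paths:
        - /dev/sdb
    db_devices:
      paths:
        - /dev/nvme0n1p1

    ceph orch apply osd -i osd_spec.yml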

[ceph-users] Re: ceph-osd performance on ram disk

2020-09-11 Thread George Shuklin
On 10/09/2020 19:37, Mark Nelson wrote:
> On 9/10/20 11:03 AM, George Shuklin wrote:
>> ... Are there any knobs to tweak to see higher performance for ceph-osd? I'm pretty sure it's not any kind of leveling, GC or other 'iops-related' issues (brd has performance two orders of magnitude higher).
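For anyone reproducing the setup, a rough sketch of the brd ram-disk part being described (size and device are examples):

    # create one 16 GiB ram block device (rd_size is in KiB)
    modprobe brd rd_nr=1 rd_size=16777216
    # then build a throwaway OSD on it, e.g.
    ceph-volume lvm create --data /dev/ram0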

[ceph-users] Re: slow "rados ls"

2020-09-11 Thread Marcel Kuiper
Hi Stefan, I can't recall that that was the case, and unfortunately we do not have enough history in our performance measurements to look back. We are on Nautilus. Please let me know your findings when you do your PG expansion on Nautilus. Grtz Marcel

> OK, I'm really curious if you observed the

[ceph-users] Re: ceph rbox test on passive compressed pool

2020-09-11 Thread Marc Roos
Hi David, Just to let you know, this hint is being set. What is the reason for ceph only doing half the objects? Can it be that there is some issue with my OSDs? Like some maybe have an old fs (still using ceph-disk, not ceph-volume)? Is this still to be expected, or does ceph under pressure drop co
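A few commands that might help narrow this down (pool name and OSD id are placeholders):

    ceph osd pool get rbox.data compression_mode        # expect 'passive' here
    ceph df detail                                      # Nautilus shows per-pool USED COMPR / UNDER COMPR
    ceph daemon osd.0 config show | grep compression    # per-OSD compression settings in effect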