[ceph-users] Process for adding a separate block.db to an osd

2020-09-18 Thread tri
Hey all, I'm trying to figure out the appropriate process for adding a separate SSD block.db to an existing OSD. From what I gather, the two steps are: 1. Use ceph-bluestore-tool bluefs-bdev-new-db to add the new db device 2. Migrate the data with ceph-bluestore-tool bluefs-bdev-migrate I followed
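
A minimal sketch of those two steps, assuming a hypothetical OSD id 12 and a hypothetical SSD LV at /dev/ssd-vg/osd12-db (stop the OSD first; all ids and paths are illustrative):

  # stop the OSD, attach the new db device, then migrate the existing RocksDB data onto it
  systemctl stop ceph-osd@12
  ceph-bluestore-tool bluefs-bdev-new-db --path /var/lib/ceph/osd/ceph-12 --dev-target /dev/ssd-vg/osd12-db
  ceph-bluestore-tool bluefs-bdev-migrate --path /var/lib/ceph/osd/ceph-12 --devs-source /var/lib/ceph/osd/ceph-12/block --dev-target /var/lib/ceph/osd/ceph-12/block.db
  systemctl start ceph-osd@12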

[ceph-users] Re: Process for adding a separate block.db to an osd

2020-09-18 Thread Eugen Block
Don’t forget to change the lv tags and make sure ceph-bluestore-tool show-label has the right labels. This has been discussed multiple times [1]. [1] https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/GSFUUIMYDPSFM2HHO25TCTPLTXBS3O2K/ Quoting t...@postix.net: Hey all, I
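
A hedged sketch of that follow-up, reusing the hypothetical OSD 12 and DB LV from above (the ceph.db_device/ceph.db_uuid tag names follow the usual ceph-volume lv-tag convention; values are placeholders):

  # verify the labels on both devices
  ceph-bluestore-tool show-label --dev /var/lib/ceph/osd/ceph-12/block
  ceph-bluestore-tool show-label --dev /var/lib/ceph/osd/ceph-12/block.db
  # update the lv tags on the data LV so ceph-volume knows about the new db device
  lvchange --addtag ceph.db_device=/dev/ssd-vg/osd12-db --addtag ceph.db_uuid=<db LV uuid> <data-vg>/<data-lv>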

[ceph-users] Using cephadm shell/ceph-volume

2020-09-18 Thread tri
Hi all, I'm having a problem creating an OSD using ceph-volume (by way of cephadm). This is on an Octopus installation with cephadm. I use "cephadm shell" and then "ceph-volume" but get the following error: root@furry:/var/lib# ceph-volume lvm prepare --data /dev/sda --block.db /dev/vg/sd

[ceph-users] Re: multiple OSD crash, unfound objects

2020-09-18 Thread Frank Schilder
Dear Michael, maybe there is a way to restore access for users and solve the issues later. Someone else with a lost/unfound object was able to move the affected file (or directory containing the file) to a separate location and restore the now missing data from backup. This will "park" the prob
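
A hedged illustration of how the affected file can be located, assuming a CephFS data pool and a mount at /mnt/cephfs; CephFS data objects are named {inode-hex}.{block-hex}, so the object name prefix identifies the file (the inode value shown is hypothetical):

  # list the unfound objects in the affected PG
  ceph pg <pgid> list_unfound
  # the object name prefix (e.g. 10000123abc) is the file's inode number in hex
  find /mnt/cephfs -inum $((16#10000123abc))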

[ceph-users] RuntimeError: Unable check if OSD id exists

2020-09-18 Thread Marc Roos
I still have ceph-disk-created OSDs in Nautilus. I thought about using ceph-volume, but it looks like the manual for replacing ceph-disk [1] is incomplete. I'm already getting this error: RuntimeError: Unable check if OSD id exists: [1] https://docs.ceph.com/en/latest/rados/operations/add-or-r
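
A hedged sketch of things to check, assuming (as is often the case) the error comes from ceph-volume being unable to query the cluster; {id} and /dev/sdX are placeholders:

  # ceph-volume needs a working ceph.conf and bootstrap-osd keyring to check the OSD id
  ls /etc/ceph/ceph.conf /var/lib/ceph/bootstrap-osd/ceph.keyring
  ceph osd tree
  # documented replacement flow: keep the old id and recreate the OSD with ceph-volume
  ceph osd destroy {id} --yes-i-really-mean-it
  ceph-volume lvm create --osd-id {id} --data /dev/sdX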

[ceph-users] Re: Benchmark WAL/DB on SSD and HDD for RGW RBD CephFS

2020-09-18 Thread Daniel Poelzleithner
On 2020-09-17 19:21, vita...@yourcmc.ru wrote: > It does, RGW really needs SSDs for bucket indexes. CephFS also needs SSDs for > metadata in any setup that's used by more than 1 user :). Nah. I crashed my first cephfs with my music library, a 2 TB git annex repo, just me alone (slow ops on mds).
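
For reference, one common way to pin the RGW index and CephFS metadata pools to SSDs is a device-class CRUSH rule; a minimal sketch, assuming pools named default.rgw.buckets.index and cephfs_metadata and OSDs carrying the ssd device class:

  ceph osd crush rule create-replicated replicated-ssd default host ssd
  ceph osd pool set default.rgw.buckets.index crush_rule replicated-ssd
  ceph osd pool set cephfs_metadata crush_rule replicated-ssd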

[ceph-users] Re: Using cephadm shell/ceph-volume

2020-09-18 Thread Eugen Block
Use the block.db option with only {VG}/{LV}, not the full device path: ceph-volume lvm prepare --data /dev/sda --block.db vg/sda.db --dmcrypt Quoting t...@postix.net: Hi all, I'm having a problem creating an OSD using ceph-volume (by way of cephadm). This is on an Octopus installation with
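
i.e. create the db LV first and then refer to it as VG/LV; a hedged sketch with illustrative VG/LV names and devices:

  # carve a db LV out of the SSD, then prepare the OSD from inside the cephadm container
  vgcreate vg /dev/nvme0n1
  lvcreate -n sda.db -L 60G vg
  cephadm shell -- ceph-volume lvm prepare --data /dev/sda --block.db vg/sda.db --dmcrypt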

[ceph-users] Re: multiple OSD crash, unfound objects

2020-09-18 Thread Frank Schilder
Dear Michael, > I disagree with the statement that trying to recover health by deleting data is a contradiction. In some cases (such as mine), the data in ceph is backed up in another location (e.g. a tape library). Restoring a few files from tape is a simple and cheap operation that takes a m
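
For context, the command being debated marks the unfound objects as lost; a hedged sketch, with <pgid> as a placeholder and only after recovery options are exhausted:

  # revert each unfound object to a prior version where one exists, or forget it entirely
  ceph pg <pgid> mark_unfound_lost revert
  # or, when the affected files will be restored from backup anyway
  ceph pg <pgid> mark_unfound_lost delete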

[ceph-users] Ceph RDMA GID Selection Problem

2020-09-18 Thread Lazuardi Nasution
Hi, I've run into something weird with GID selection for Ceph with RDMA. When I configure ms_async_rdma_device_name and ms_async_rdma_gid_idx, Ceph with RDMA runs successfully. But when I configure ms_async_rdma_device_name, ms_async_rdma_local_gid and ms_async_rdma_roce_ver,
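
For reference, the two configuration styles being compared look roughly like this in ceph.conf (device name, GID index, GID value and RoCE version are illustrative):

  [global]
  ms_type = async+rdma
  ms_async_rdma_device_name = mlx5_0
  # variant 1: select the GID by index
  ms_async_rdma_gid_idx = 3
  # variant 2: select the GID by value and RoCE version (instead of gid_idx)
  #ms_async_rdma_local_gid = 0000:0000:0000:0000:0000:ffff:0a00:0a01
  #ms_async_rdma_roce_ver = 2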

[ceph-users] Re: Benchmark WAL/DB on SSD and HDD for RGW RBD CephFS

2020-09-18 Thread Maged Mokhtar
dm-writecache works using high and low watermarks, set at 50% and 45% respectively. All writes land in the cache; once the cache fills to the high watermark, backfilling to the slow device starts, and it stops when the low watermark is reached. Backfilling uses a b-tree with LRU blocks and tries to merge blocks to reduce h
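
A hedged sketch of setting this up with LVM (VG/LV names, sizes and devices are illustrative; the watermark settings shown match the behaviour described above):

  # create a cache LV on the SSD and attach it as a writecache in front of the slow LV
  lvcreate -n osd1_cache -L 100G vg_osd1 /dev/nvme0n1
  lvconvert --type writecache --cachevol osd1_cache vg_osd1/osd1
  # optionally tune the watermarks
  lvchange --cachesettings 'high_watermark=50 low_watermark=45' vg_osd1/osd1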

[ceph-users] Re: Benchmark WAL/DB on SSD and HDD for RGW RBD CephFS

2020-09-18 Thread George Shuklin
On 17/09/2020 17:37, Mark Nelson wrote: Does fio handle S3 objects spread across many buckets well? I think bucket listing performance was maybe missing too, but it's been a while since I looked at fio's S3 support. Maybe they have those use cases covered now. I wrote a Go-based benchmark cal
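
For anyone who wants to try it, fio's http ioengine has an S3 mode; an untested job-file sketch, with endpoint, credentials, bucket and object name as placeholders:

  [global]
  ioengine=http
  http_mode=s3
  https=off
  http_host=rgw.example.net:8080
  http_s3_keyid=ACCESSKEY
  http_s3_key=SECRETKEY
  filename=/mybucket/fio-object
  bs=4m
  size=256m

  [s3-write]
  rw=write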

[ceph-users] Re: virtual machines crashes after upgrade to octopus

2020-09-18 Thread Lomayani S. Laizer
Hello Jason, I confirm this release fixes the crashes. There has not been a single crash for the past 4 days. On Mon, Sep 14, 2020 at 2:55 PM Jason Dillaman wrote: > On Mon, Sep 14, 2020 at 5:13 AM Lomayani S. Laizer wrote: > > Hello, > > Last week I got time to try to debug crashes of these VMs > >

[ceph-users] September Ceph Science User Group Virtual Meeting

2020-09-18 Thread Kevin Hrpcek
Hey all, We will be having a Ceph science/research/big cluster call on Wednesday September 23rd. If anyone wants to discuss something specific they can add it to the pad linked below. If you have questions or comments you can contact me. This is an informal open call of community members mos

[ceph-users] disk scheduler for SSD

2020-09-18 Thread George Shuklin
I'm starting to wonder (again) which scheduler is better for Ceph on SSD. My reasoning. None: 1. Reduces latency for requests. The lower the latency, the higher the perceived performance for an unbounded workload with a fixed queue depth (hello, benchmarks). 2. Causes possible spikes in latency for reque
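
For reference, a short sketch of checking and switching the scheduler (the device name is illustrative; the udev rule makes the choice persistent for non-rotational disks):

  cat /sys/block/sda/queue/scheduler
  echo none > /sys/block/sda/queue/scheduler
  echo 'ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="none"' > /etc/udev/rules.d/60-ssd-scheduler.rules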

[ceph-users] Problem with manual deep-scrubbing PGs on EC pools

2020-09-18 Thread Osiński Piotr
Hi, We have a little problem with deep-scrubbing PGs on an EC pool. [root@mon-1 ~]# ceph health detail HEALTH_WARN 1 pgs not deep-scrubbed in time PG_NOT_DEEP_SCRUBBED 1 pgs not deep-scrubbed in time pg 14.d4 not deep-scrubbed since 2020-09-05 20:26:02.696191 [root@mon-1 ~]# ceph pg deep-sc
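
For completeness, the manual deep-scrub command and two settings that commonly hold deep scrubs back (a hedged checklist, not a diagnosis):

  ceph pg deep-scrub 14.d4
  # scrubs are throttled per OSD and skipped during recovery by default
  ceph config get osd osd_max_scrubs
  ceph config get osd osd_scrub_during_recovery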

[ceph-users] RGW multisite replication doesn't start

2020-09-18 Thread Eugen Block
Hi *, I have 2 virtual one-node-clusters configured for multisite RGW. In the beginning the replication actually worked for some hundred MB or so, and then it stopped. In the meantime I wiped both RGWs twice to make sure the configuration is right (including wiping all pools clean). I do
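
The usual first diagnostics for a stalled multisite sync, for reference (the source zone name is whatever the setup defines):

  radosgw-admin sync status
  radosgw-admin data sync status --source-zone=<other-zone>
  radosgw-admin metadata sync status
  radosgw-admin sync error list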

[ceph-users] Re: multiple OSD crash, unfound objects

2020-09-18 Thread Michael Thomas
Hi Frank, On 9/18/20 2:50 AM, Frank Schilder wrote: Dear Michael, firstly, I'm a bit confused why you started deleting data. The objects were unfound, but still there. That's a small issue. Now the data might be gone and that's a real issue. Interval: Anyone rea

[ceph-users] Re: multiple OSD crash, unfound objects

2020-09-18 Thread Frank Schilder
Dear Michael, firstly, I'm a bit confused why you started deleting data. The objects were unfound, but still there. That's a small issue. Now the data might be gone and that's a real issue. Interval: Anyone reading this: I have seen many threads where ceph admins s

[ceph-users] Re: Benchmark WAL/DB on SSD and HDD for RGW RBD CephFS

2020-09-18 Thread vitalif
> we did test dm-cache, bcache and dm-writecache, we found the latter to be much better. Did you set the bcache block size to 4096 during your tests? Without this setting it's slow because 99.9% of SSDs don't handle 512 byte overwrites well. Otherwise I don't think bcache should be worse than dm-write
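
i.e. something like the following when formatting the bcache device (devices are illustrative; the block size has to be set at creation time):

  make-bcache --block 4k -C /dev/nvme0n1 -B /dev/sda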

[ceph-users] Re: Spanning OSDs over two drives

2020-09-18 Thread Robert Sander
Hi, On 18.09.20 at 03:53, Liam MacKenzie wrote: > As I understand that using RAID isn't recommended, how would I best deploy my cluster so it's smart enough to group drives according to the trays that they're in? You could treat both disks as one and do a RAID0 over them with one OSD on i
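
A hedged sketch of that approach with mdadm (device and array names are illustrative; whether RAID0 under an OSD is advisable is exactly what is being debated here):

  mdadm --create /dev/md/tray1 --level=0 --raid-devices=2 /dev/sda /dev/sdb
  ceph-volume lvm create --data /dev/md/tray1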

[ceph-users] Re: Benchmark WAL/DB on SSD and HDD for RGW RBD CephFS

2020-09-18 Thread huxia...@horebdata.cn
Dear Maged, Do you mean dm-writecache is better than bcache in terms of small IO performance? By how much? Could you please share a bit more detail? Thanks in advance, Samuel huxia...@horebdata.cn From: Maged Mokhtar Date: 2020-09-18 02:12 To: ceph-users Subject: [ceph-users] Re: Be