[ceph-users] Re: Cheap M.2 2280 SSD for Ceph

2021-11-16 Thread Varun Priolkar
Thanks. I'll try to source these; they're a little hard to find. Regards, Varun Priolkar On Mon, 15 Nov, 2021, 10:57 am Ján Senko wrote: > In my experience there is only one good 2280-size model for Ceph, > and that is the Micron 7300 MAX. > They go up to 800GB in size. The 7400 MAX just came out recently

[ceph-users] Re: Cheap M.2 2280 SSD for Ceph

2021-11-16 Thread Varun Priolkar
Unfortunately the Seagate drives aren't 2280, which is the max size that fits my build. Regards, Varun Priolkar On Mon, 15 Nov, 2021, 10:17 am Mario Giammarco wrote: > You can also use consumer drives, considering that it is a homelab. > Otherwise try to find the Seagate Nytro XM1441 or XM1440. > Mar

[ceph-users] Re: Cheap M.2 2280 SSD for Ceph

2021-11-16 Thread Varun Priolkar
Thanks. I think I will buy one to benchmark and buy the rest if it works well. Regards, Varun Priolkar On Mon, 15 Nov, 2021, 8:57 am Eneko Lacunza wrote: > Hi Varun, > > That Kingston DC-grade model should work (well enough at least for a home > lab); it has PLP. Note I haven't used that model
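
For benchmarking a candidate SSD for Ceph, the sync/direct write test below is a commonly used starting point; it is only a sketch, the device path is a placeholder, and writing to a raw device is destructive.

# Single-threaded 4k sync writes, the pattern that dominates OSD WAL/DB load
# (/dev/nvme0n1 is a placeholder -- this will overwrite data on the device)
fio --name=ceph-ssd-test --filename=/dev/nvme0n1 \
    --ioengine=libaio --direct=1 --sync=1 \
    --rw=write --bs=4k --numjobs=1 --iodepth=1 \
    --runtime=60 --time_based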

[ceph-users] pg inactive+remapped

2021-11-16 Thread Joffrey
Hi, I don't understand why my Global Recovery Event never finishes... I have 3 hosts, and all OSDs and hosts are up. My pools are replica 3. # ceph status cluster: id: 0a77af8a-414c-11ec-908a-005056b4f234 health: HEALTH_WARN Reduced data availability: 1 pg inactive D
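
A few commands commonly used to narrow down an inactive PG like this (the PG id 4.a3 comes from the follow-up message below):

# List PGs stuck inactive and show the overall health detail
ceph pg dump_stuck inactive
ceph health detail
# Query the problematic PG for its up/acting sets and recovery state
ceph pg 4.a3 query
# Show which OSDs the PG maps to
ceph pg map 4.a3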

[ceph-users] Re: Fwd: pg inactive+remapped

2021-11-16 Thread Joffrey
# ceph pg 4.a3 query { "snap_trimq": "[]", "snap_trimq_len": 0, "state": "activating+undersized+degraded+remapped", "epoch": 640, "up": [ 7, 0, 8 ], "acting": [ 7, 8 ], "async_recovery_targets": [ "0" ], "ac

[ceph-users] Re: mons fail as soon as I attempt to mount

2021-11-16 Thread 胡 玮文
Hi Jeremy. Since you say the mons fail, could you share the logs of the failing mons? It is hard to diagnose with so little information. From: Jeremy Hansen Sent: 16 November 2021, 19:27 To: ceph-users Subject: [ceph-users] Re: mons fail as soon as I attem

[ceph-users] Re: Fwd: pg inactive+remapped

2021-11-16 Thread 胡 玮文
> But my log file for osd 7 is empty Is it deployed by cephadm? If so, you can try “sudo cephadm logs --name osd.7”, which is a wrapper around journalctl.
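
As a minimal sketch of that workflow (daemon name taken from the message; the journalctl pass-through arguments are optional):

# Show the journal of the containerized daemon osd.7
sudo cephadm logs --name osd.7
# Extra arguments after "--" are passed through to journalctl, e.g. last 100 lines
sudo cephadm logs --name osd.7 -- -n 100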

[ceph-users] Re: Fwd: pg inactive+remapped

2021-11-16 Thread Joffrey
OK, restarting my OSD 0 fixed the problem! Thank you. On Tue, 16 Nov 2021 at 13:32, Stefan Kooman wrote: > On 11/16/21 13:17, Joffrey wrote: > > > "peer_info": [ > > { > > "peer": "0", > > "pgid": "4.a3", > > "last_update": "373'783", > >
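
For reference, an OSD on a cephadm-managed cluster is typically restarted in one of these ways (the fsid is a placeholder):

# Via the orchestrator
ceph orch daemon restart osd.0
# Or directly through systemd on the host running the OSD
sudo systemctl restart ceph-<fsid>@osd.0.service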

[ceph-users] Re: OSDs get killed by OOM when other host goes down

2021-11-16 Thread Mark Nelson
Yeah, if it's not memory reported by the mempools, that means it's something we aren't tracking. Perhaps temporary allocations in some dark corner of the code, or possibly RocksDB (though 38 GB of RAM is obviously excessive). Heap stats are a good idea. It's possible if neither the heap stats
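
The heap and mempool statistics mentioned here can be pulled per OSD roughly like this (osd.12 is a placeholder id):

# Dump tcmalloc heap statistics for one OSD
ceph tell osd.12 heap stats
# Dump the memory tracked by Ceph's internal mempools
ceph tell osd.12 dump_mempools
# Ask tcmalloc to return freed memory to the OS
ceph tell osd.12 heap release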

[ceph-users] cephadm / ceph orch : indefinite hang adding hosts to new cluster

2021-11-16 Thread Lincoln Bryant
Greetings list, We have a new Ceph cluster we are trying to deploy on EL8 (CentOS Stream) using cephadm (+podman), targeting Pacific. We can bootstrap the first host successfully, but attempting to add any additional hosts hangs indefinitely. We have confirmed that we are able to SSH f
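
For comparison, the usual host-add sequence on a cephadm cluster looks roughly like the sketch below; the host name and address are placeholders.

# Copy the cluster's public SSH key to the new host
ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-node2
# Add the host to the orchestrator
ceph orch host add ceph-node2 192.168.1.12
# Verify that the host shows up
ceph orch host ls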

[ceph-users] how to list ceph file size on ubuntu 20.04

2021-11-16 Thread zxcs
Hi, I want to list CephFS directory sizes on Ubuntu 20.04, but when I use ls -alh [directory], it shows the number of files and directories under this directory (it only counts the number, not the size). I remember that when I used ls -alh [directory] on Ubuntu 16.04, it showed the size of this directory (i

[ceph-users] Re: how to list ceph file size on ubuntu 20.04

2021-11-16 Thread 胡 玮文
There is an rbytes mount option [1]. Alternatively, you can use “getfattr -n ceph.dir.rbytes /path/in/cephfs” [1]: https://docs.ceph.com/en/latest/man/8/mount.ceph/#advanced Weiwen Hu On 17 November 2021, at 10:26, zxcs wrote: Hi, I want to list cephfs directory size on ubuntu 20.04, but when I use ls -alh [di
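
As a concrete sketch (the mount point and directory are placeholders):

# Recursive size of everything under a directory
getfattr -n ceph.dir.rbytes /mnt/cephfs/mydir
# Related virtual xattrs: recursive file and entry counts
getfattr -n ceph.dir.rfiles /mnt/cephfs/mydir
getfattr -n ceph.dir.rentries /mnt/cephfs/mydir
# Or mount with rbytes so directory sizes shown by ls reflect recursive usage
sudo mount -t ceph mon-host:/ /mnt/cephfs -o name=admin,rbytes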

[ceph-users] Re: OSDs get killed by OOM when other host goes down

2021-11-16 Thread Marius Leustean
I did the heap release. It did not free the memory right away, but after I restarted the OSD, it started with "just" 10GB of RAM. After ~12h the container again reports 30+ GB of RAM. Note that I lowered the pg_log values too: osd_min_pg_log_entries = 100 osd_max_pg_log_entries = 500 osd_target_pg_log
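
For reference, a sketch of how those pg_log limits are typically applied and verified at runtime (values copied from the message above):

# Lower the pg_log limits for all OSDs
ceph config set osd osd_min_pg_log_entries 100
ceph config set osd osd_max_pg_log_entries 500
# Verify the effective values
ceph config get osd osd_min_pg_log_entries
ceph config get osd osd_max_pg_log_entries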

[ceph-users] Re: Adding a RGW realm to a single cephadm-managed ceph cluster

2021-11-16 Thread Eugen Block
Hi, I retried it with your steps, and they worked for me. The first non-default realm runs on the default port 80 on two RGWs, and the second realm is on the same hosts on port 8081, as configured in the spec file. The period commit ran successfully, though, so maybe there's something wrong with you
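
For reference, the realm/zonegroup/zone setup that precedes the period commit typically looks something like this (realm, zonegroup and zone names are placeholders; the spec file name is hypothetical):

# Create the additional, non-default realm with its zonegroup and zone
radosgw-admin realm create --rgw-realm=realm2
radosgw-admin zonegroup create --rgw-zonegroup=zg2 --rgw-realm=realm2 --master
radosgw-admin zone create --rgw-zonegroup=zg2 --rgw-zone=zone2 --master
radosgw-admin period update --commit --rgw-realm=realm2
# Deploy the RGWs for the new realm from a service spec (e.g. rgw_frontend_port: 8081)
ceph orch apply -i rgw-realm2.yaml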

[ceph-users] Re: Why you might want packages not containers for Ceph deployments

2021-11-16 Thread Peter Lieven
> On 09.11.2021 at 00:01, Igor Fedotov wrote: > > Hi folks, > > having an LTS release cycle could be a great topic for the upcoming "Ceph User + > Dev Monthly meeting". > > The first one is scheduled on November 18, 2021, 14:00-15:00 UTC > > https://pad.ceph.com/p/ceph-user-dev-monthly-minute