[ceph-users] Re: [RGW] Strange issue of multipart object

2024-06-18 Thread Konstantin Shalygin
Xin chao, The latest Pacific (16.2.15) has multiple multipart issue fixes (for example [1]); I suggest upgrading your release as a start k [1] https://tracker.ceph.com/issues/56673 Sent from my iPhone > On 18 Jun 2024, at 10:32, Huy Nguyen wrote: > > Hi, > I'm using Ceph v16.2.13. Using `

[ceph-users] Re: cephadm for Ubuntu 24.04

2024-07-11 Thread Konstantin Shalygin
> On 11 Jul 2024, at 15:20, John Mulligan wrote: > > I'll ask to have backport PRs get generated. I'm personally pretty clueless > as > to how to process backports. The how-to described in this doc [1] > Thanks, I hadn't found that one. Added backport for squid release [2], as far as I unde

[ceph-users] Re: Unable to mount with 18.2.2

2024-07-17 Thread Konstantin Shalygin
Hi, > On 17 Jul 2024, at 10:21, Frédéric Nass > wrote: > > Seems like msgr v2 activation did only occur after all 3 MONs were redeployed > and used RocksDB. Not sure why this happened though. To work with msgr v2 only, you need to specify ms_mode as prefer-crc, at least. For example of fst
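
For reference, a minimal sketch of such a mount (the mon address, path, client name and secret file are placeholders, not taken from the thread; ms_mode needs kernel 5.11 or newer):

    # contact the mon on the msgr v2 port and prefer the crc mode
    mount -t ceph 10.0.0.1:3300:/ /mnt/cephfs \
        -o name=client1,secretfile=/etc/ceph/client1.secret,ms_mode=prefer-crc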

[ceph-users] Re: [Ceph-announce] v18.2.4 Reef released

2024-07-24 Thread Konstantin Shalygin
Hi, > On 25 Jul 2024, at 00:12, Yuri Weinstein wrote: > > We're happy to announce the 4th release in the Reef series. The reef repo now returns 404 > GET /rpm-reef/el8/SRPMS/ HTTP/1.1 > Host: download.ceph.com > < HTTP/1.1 404 Not Found If I change the repo to the previous version, the answer is 20

[ceph-users] Re: Reef 18.2.4 EL8 packages ?

2024-07-25 Thread Konstantin Shalygin
Hi, > On 25 Jul 2024, at 14:39, Noe P. wrote: > > I find this 18.2.4 release highly confusing. > > Can anyone please confirm that EL8 packages are no more supported ? > The small remark in the release notes > > qa/distros: remove centos 8 from supported distros (pr#57932, Guillaume > Abrioux

[ceph-users] Re: Reef 18.2.4 EL8 packages ?

2024-07-26 Thread Konstantin Shalygin
Hi, > On 26 Jul 2024, at 20:22, Josh Durgin wrote: > > We didn't want to stop building on Centos 8, but the way it went end of > life and stopped doing any security updates forced our hand. See this > thread for details [0]. > > Essentially this made even building and testing with Centos 8 infe

[ceph-users] Re: ceph 18.2.4 on el8?

2024-07-29 Thread Konstantin Shalygin
Hi, > On 30 Jul 2024, at 00:51, Christopher Durham wrote: > > I see that 18.2.4 is out, in rpm for el9 at: > http://download.ceph.com/rpm-18.2.4/ Are there any plans for an '8' version? > One of my clusters is not yet ready to update to Rocky 9. We will update to 9 > moving forward but this i

[ceph-users] Re: ceph 18.2.4 on el8?

2024-07-31 Thread Konstantin Shalygin
Hi, > On 30 Jul 2024, at 00:51, Christopher Durham > wrote: > > I see that 18.2.4 is out, in rpm for el9 at: > http://download.ceph.com/rpm-18.2.4/ Are there any plans for an '8' version? > One of my clusters is not yet ready to update to Rocky 9. We will update to 9

[ceph-users] Re: Pull failed on cluster upgrade

2024-08-07 Thread Konstantin Shalygin
Hi, > On 7 Aug 2024, at 10:31, Nicola Mori wrote: > > Unfortunately I'm on bare metal, with very old hardware so I cannot do much. > I'd try to build a Ceph image based on Rocky Linux 8 if I could get the > Dockerfile of the current image to start with, but I've not been able to find > it. Ca

[ceph-users] Re: Connecting A Client To 2 Different Ceph Clusters

2024-08-24 Thread Konstantin Shalygin
Hi, > On 24 Aug 2024, at 10:57, duluxoz wrote: > > How do I set up the ceph.conf file(s) on my clients so that I can use > /etc/fstab to connect to each CephFS or each Ceph Cluster? You can try to setup your fstab like this: 10.0.0.1:3300,10.0.0.2:3300,10.0.0.3:3300:/folder1 /mnt ceph name=
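
A fuller fstab sketch along those lines, assuming two clusters with hypothetical mon addresses, CephX users and secret files (adjust everything to your environment):

    # cluster A
    10.0.0.1:3300,10.0.0.2:3300,10.0.0.3:3300:/folder1  /mnt/clusterA  ceph  name=userA,secretfile=/etc/ceph/userA.secret,noatime,_netdev  0 0
    # cluster B
    10.1.0.1:3300,10.1.0.2:3300,10.1.0.3:3300:/folder2  /mnt/clusterB  ceph  name=userB,secretfile=/etc/ceph/userB.secret,noatime,_netdev  0 0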

[ceph-users] Re: Connecting A Client To 2 Different Ceph Clusters

2024-08-24 Thread Konstantin Shalygin
Hi, I don't think the kernel reads any files, maybe only via some userland helper. In any case, it's up to you, but for the kernel ceph client to work, nothing but the kernel is needed; bindings in the form of files and user-space utilities only complicate the system, in my view k Sent from my i

[ceph-users] ceph-mgr perf throttle-msgr - what is caused fails?

2024-09-06 Thread Konstantin Shalygin
Hi, it seems something in mgr is throttled because val > max. Am I right? root@mon1# ceph daemon /var/run/ceph/ceph-mgr.mon1.asok perf dump | jq '."throttle-msgr_dispatch_throttler-mgr-0x55930f4aed20"' { "val": 104856554, "max": 104857600, "get_started": 0, "get": 9700833, "get_sum": 6544522184

[ceph-users] Re: ceph-mgr perf throttle-msgr - what is caused fails?

2024-09-08 Thread Konstantin Shalygin
>> "max": 104857600, > > So it probably doesn't have any visible impact, does it? But the values are > not that far apart, maybe they burst sometime, leading to the fail_fail > counter to increase? Do you have that monitored? > > Thanks, > Eugen >
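
One way to watch this, sketched from the socket path shown earlier in the thread and the standard throttle perf counters (get_or_fail_fail is the counter that increments when a throttled get fails):

    ceph daemon /var/run/ceph/ceph-mgr.mon1.asok perf dump \
      | jq 'to_entries[]
            | select(.key | startswith("throttle-msgr"))
            | {throttler: .key, val: .value.val, max: .value.max,
               fails: .value.get_or_fail_fail}'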

[ceph-users] Re: ceph-mgr perf throttle-msgr - what is caused fails?

2024-09-13 Thread Konstantin Shalygin
As I said before, currently the Prometheus module performance degradation is the only _visible_ issue. I named things like this as an indicator (of future problems) k Sent from my iPhone > On 12 Sep 2024, at 23:18, Eugen Block wrote: > > But did you notice any actual issues or did you just see tha

[ceph-users] Re: ceph-mgr perf throttle-msgr - what is caused fails?

2024-09-14 Thread Konstantin Shalygin
Hi, Increasing this value to 30 is the only thing I could do at the moment k Sent from my iPhone > On 13 Sep 2024, at 16:49, Eugen Block wrote: > > I remember having a prometheus issue quite some time ago, it couldn't handle > 30 nodes or something, not really a big cluster. But we needed to

[ceph-users] Re: Bucket sharding

2020-10-14 Thread Konstantin Shalygin
On 09.10.2020 15:44, Szabo, Istvan (Agoda) wrote: I have a bucket which is close to 10 millions objects (9.1 millions), we have: rgw_dynamic_resharding = false rgw_override_bucket_index_max_shards = 100 rgw_max_objs_per_shard = 10 Do I need to increase the numbers soon or it is not possible

[ceph-users] Re: Unable to clarify error using vfs_ceph (Samba gateway for CephFS)

2020-11-18 Thread Konstantin Shalygin
On 12.11.2020 02:40, Matt Larson wrote: ceph:user_id = samba.upload ceph:config_file = /etc/ceph/ceph.conf I have a file at /etc/ceph/ceph.conf including: fsid = redacted mon_host = redacted auth_cluster_required = cephx auth_service_required = cephx auth_client_required = ceph

[ceph-users] Re: Weird ceph use case, is there any unknown bucket limitation?

2020-11-18 Thread Konstantin Shalygin
It's okay for Ceph, just prepare NVMe pool for metadata. k Sent from my iPhone > On 18 Nov 2020, at 11:29, Szabo, Istvan (Agoda) > wrote: > > Hi, > > I have a use case where the user would like to have 5 Buckets. > Is it normal for ceph just too much for me? > > > The reason they wan

[ceph-users] Re: EC overwrite

2020-11-19 Thread Konstantin Shalygin
I don't think it would be a problem. But it is also useless. k Sent from my iPhone > On 18 Nov 2020, at 07:06, Szabo, Istvan (Agoda) > wrote: > > Is it s problem if ec_overwrite enabled in the data pool? > https://docs.ceph.com/en/latest/rados/operations/erasure-code/#erasure-coding-with-overwr

[ceph-users] Re: Cephfs snapshots and previous version

2020-11-24 Thread Konstantin Shalygin
Just rpmbuild the latest samba version with the vfs features enabled. These modules are stable. k Sent from my iPhone > On 24 Nov 2020, at 10:51, Frank Schilder wrote: > > We made the same observation and found out that for CentOS8 there are extra > modules for samba that provide vfs modules for certa

[ceph-users] Re: 14. 2.15: Question to collection_list_legacy osd bug fixed in 14.2.15

2020-11-24 Thread Konstantin Shalygin
This bug may affect you when you upgrade from 14.2.11 to 14.2.(12|14) slowly (e.g. node by node). If you have already upgraded from 14.2.11, you have just jumped over this bug. k Sent from my iPhone > On 24 Nov 2020, at 10:43, Rainer Krienke wrote: > > Hello, > > I am running a productive cep

[ceph-users] Re: Manual bucket resharding problem

2020-11-24 Thread Konstantin Shalygin
Take a look at the `radosgw-admin reshard stale-instances list` command. And if the list is not empty, just rm these stale instances and then start the reshard process again. k Sent from my iPhone > On 22 Nov 2020, at 15:35, Mateusz Skała wrote: > > Thank You for response, how I can upload this to metadat
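
The full sequence being suggested, sketched with a placeholder bucket name and shard count:

    radosgw-admin reshard stale-instances list
    radosgw-admin reshard stale-instances rm
    # then reshard the bucket again (or let dynamic resharding handle it)
    radosgw-admin bucket reshard --bucket=mybucket --num-shards=101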

[ceph-users] Re: 14. 2.15: Question to collection_list_legacy osd bug fixed in 14.2.15

2020-11-24 Thread Konstantin Shalygin
> Have a nice day > Rainer > >> Am 24.11.20 um 11:18 schrieb Konstantin Shalygin: >> This bug may be affect when you upgrade from 14.2.11 to 14.2.(12|14) with >> low speed (e.g. one by one node). If you already upgrade from 14.2.11 you >> just jump over this bug.

[ceph-users] Re: Manual bucket resharding problem

2020-11-30 Thread Konstantin Shalygin
ta on Sunday, as I write before. Someone have done > this on production? > Regards > Mateusz Skała > >> Wiadomość napisana przez Konstantin Shalygin w dniu >> 24.11.2020, o godz. 11:35: >> >> Try to look at `radosgw-admin reshard stale-instances list` command.

[ceph-users] Re: Determine effective min_alloc_size for a specific OSD

2020-12-02 Thread Konstantin Shalygin
The Bluestore alloc size is fixed in the config and applied only at bluestore OSD creation. You can change it in the conf and then recreate your OSD. k On 02.12.2020 14:51, 胡 玮文 wrote: I remember that min_alloc_size cannot be changed after OSD creation, but I can’t find the source now. Searching results in
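
A sketch of that workflow, assuming a hypothetical osd.12 on /dev/sdX (the value is only picked up at mkfs time, so the OSD has to be recreated):

    ceph config set osd bluestore_min_alloc_size_hdd 4096
    ceph osd out 12 && systemctl stop ceph-osd@12
    # wait for the data to drain before destroying
    ceph osd destroy 12 --yes-i-really-mean-it
    ceph-volume lvm zap --destroy /dev/sdX
    ceph-volume lvm create --osd-id 12 --data /dev/sdX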

[ceph-users] Re: How to copy an OSD from one failing disk to another one

2020-12-08 Thread Konstantin Shalygin
Destroy this OSD, replace disk, deploy OSD. k Sent from my iPhone > On 8 Dec 2020, at 15:13, huxia...@horebdata.cn wrote: > > Hi, dear cephers, > > On one ceph i have a failing disk, whose SMART information signals an > impending failure but still availble for reads and writes. I am setting

[ceph-users] Re: krbd cache quesitions

2020-12-23 Thread Konstantin Shalygin
krbd or a /ceph_fs_mount_point already uses one of the best caches - the kernel page cache k On 23.12.2020 14:08, huxia...@horebdata.cn wrote: Dear ceph folks, rbd_cache can be set up as a read /write cache for librbd, widely used with openstack cinder. Does krbd has a silmilar cache controll mecha

[ceph-users] Re: Which version of Ceph fully supports CephFS Snapshot?

2021-01-12 Thread Konstantin Shalygin
Nautilus will be a good solution for this k Sent from my iPhone > On 11 Jan 2021, at 06:25, fantastic2085 wrote: > > I would like to use the Cephfs Snapshot feature, which version of Ceph > supports it > ___ > ceph-users mailing list -- ceph-user

[ceph-users] Re: [Query] Safe to discard bucket lock objects in reshard pool?

2021-01-19 Thread Konstantin Shalygin
Just update your Luminous to 12.2.13, use the `radosgw-admin reshard stale-instances rm` command, and leave the dynamic resharding feature enabled. In this release the resharding bugs were fixed. Cheers, k Sent from my iPhone > On 19 Jan 2021, at 18:58, Prasad Krishnan > wrote: > > We have a slightly dated vers

[ceph-users] Re: Running ceph cluster on different os

2021-01-25 Thread Konstantin Shalygin
Of course, Ceph's original mission is independence from distros and hardware. Just match your package versions. Cheers, k Sent from my iPhone > On 26 Jan 2021, at 08:15, Szabo, Istvan (Agoda) > wrote: > > Is there anybody running a cluster with different os? > Due to the centos 8 change I might

[ceph-users] Re: PG inconsistent with empty inconsistent objects

2021-01-27 Thread Konstantin Shalygin
Paste your `ceph versions` please k Sent from my iPhone > On 27 Jan 2021, at 03:07, Richard Bade wrote: > > Ceph v14.2.13 by the way. ___ ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an email to ceph-users-le...@ceph.io

[ceph-users] Re: CEPHFS - MDS gracefull handover of rank 0

2021-01-27 Thread Konstantin Shalygin
Martin, also before the restart - issue a cache drop command to the active mds k Sent from my iPhone > On 27 Jan 2021, at 11:58, Dan van der Ster wrote: > > In our experience failovers are largely transparent if the mds has: > >mds session blacklist on timeout = false >mds session blacklist on
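
A sketch of that pre-failover step, assuming the active rank 0 daemon is named mds.a and using a 60 second timeout (names and timeout are placeholders):

    ceph tell mds.a cache drop 60
    ceph mds fail a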

[ceph-users] Re: XFS block size on RBD / EC vs space amplification

2021-02-03 Thread Konstantin Shalygin
Actually, with Igor's latest patches the default min alloc size for hdd is 4K k Sent from my iPhone > On 2 Feb 2021, at 13:12, Gilles Mocellin > wrote: > > Hello, > > As we know, with 64k for bluestore_min_alloc_size_hdd (I'm only using HDDs), > in certain conditions, especially with erasure codi

[ceph-users] Re: krbd crc

2021-02-10 Thread Konstantin Shalygin
Msgr2 will be supported from kernel 5.11 k Sent from my iPhone > On 11 Feb 2021, at 03:35, Seena Fallah wrote: > > Does it support msgr v2? (If not which kernel supports msgr v2?) ___ ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send

[ceph-users] Re: adding a second rgw instance on the same zone

2021-02-13 Thread Konstantin Shalygin
Just start new rgw’s with the same configs on another ip/port, everything will be fine Cheers, k > On 11 Feb 2021, at 20:01, Adrian Nicolae wrote: > > I have a Mimic cluster with only one RGW machine. My setup is simple - one > realm, one zonegroup, one zone. How can I safely add a second

[ceph-users] Re: RGW/Swift 404 error when listing/deleting a newly created empty bucket

2021-02-17 Thread Konstantin Shalygin
Mike, did you create a ticket for this issue, especially with logs and a reproducer? k Sent from my iPhone > On 17 Feb 2021, at 20:54, Mike Cave wrote: > > I am bumping this email to hopefully get some more eyes on it. > > We are continuing to have this problem. Unfortunately the cluster is very

[ceph-users] Re: Ceph nvme timeout and then aborting

2021-02-19 Thread Konstantin Shalygin
Please paste your `nvme smart-log /dev/nvme0n1` output k > On 19 Feb 2021, at 12:53, zxcs wrote: > > I have one ceph cluster with nautilus 14.2.10 and one node has 3 SSD and 4 > HDD each. > Also has two nvmes as cache. (Means nvme0n1 cache for 0-2 SSD and Nvme1n1 > cache for 3-7 HDD) >

[ceph-users] Re: Ceph nvme timeout and then aborting

2021-02-19 Thread Konstantin Shalygin
Looks good, what is your hardware? Server model & NVMes? k > On 19 Feb 2021, at 13:22, zxcs wrote: > > BTW, actually i have two nodes has same issues, and another error node's nvme > output as below > > Smart Log for NVME device:nvme0n1 namespace-id: > critical_warning

[ceph-users] Re: Storing 20 billions of immutable objects in Ceph, 75% <16KB

2021-02-23 Thread Konstantin Shalygin
OMAP keys work with database-like replication: new keys/updates come to the acting set as a data stream, not as a full object k Sent from my iPhone > On 22 Feb 2021, at 17:13, Benoît Knecht wrote: > > Is recovery faster for OMAP compared to the equivalent number of RADOS > objects? ___

[ceph-users] Re: Question on multi-site

2021-02-23 Thread Konstantin Shalygin
Replication works at the OSD layer; rgw is an HTTP frontend for objects. If you write some object via librados directly, rgw will not be aware of it k Sent from my iPhone > On 22 Feb 2021, at 18:52, Cary FitzHugh wrote: > > Question is - do files which are written directly to an OSD get rep

[ceph-users] Re: List number of buckets owned per user

2021-02-24 Thread Konstantin Shalygin
Or you can derive the users from bucket usage; consult the code of radosgw_usage_exporter [1] - maybe it is enough to just start the exporter and work with the data in Grafana Cheers, k [1] https://github.com/blemmenes/radosgw_usage_exporter > On 24 Feb 2021, at 16:08, Marcelo wrote: > > I'm trying to lis

[ceph-users] Re: Openstack rbd image Error deleting problem

2021-03-09 Thread Konstantin Shalygin
> On 10 Mar 2021, at 09:50, Norman.Kern wrote: > > I have used Ceph rbd for Openstack for sometime, I met a problem while > destroying a VM. The Openstack tried to > > delete rbd image but failed. I have a test deleting a image by rbd command, > it costs lots of time(image size 512G or more)

[ceph-users] Re: RadosGW unable to start resharding

2021-03-10 Thread Konstantin Shalygin
Try to look at: radosgw-admin reshard stale-instances list Then: radosgw-admin reshard stale-instances rm k > On 10 Mar 2021, at 12:11, Ansgar Jazdzewski > wrote: > > We are running ceph 14.2.16 and I like to reshard a bucket because I > have a large object warning! > > so I did: > radosgw

[ceph-users] Re: Openstack rbd image Error deleting problem

2021-03-11 Thread Konstantin Shalygin
You can enable the object-map feature online and rebuild it. This will help with deleting objects. k Sent from my iPhone > On 11 Mar 2021, at 04:05, Norman.Kern wrote: > > No, I use its default features like this: ___ ceph-users mailing list -- ce
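
A sketch with a placeholder pool/image name (object-map requires exclusive-lock, hence both features; skip exclusive-lock if it is already enabled):

    rbd feature enable rbd/vm-disk-1 exclusive-lock object-map fast-diff
    rbd object-map rebuild rbd/vm-disk-1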

[ceph-users] Current BlueStore cache autotune (memory target) is respect media?

2021-03-15 Thread Konstantin Shalygin
Hi, Do the current defaults (for Nautilus, for example) respect the media type (hdd/ssd/hybrid), or is the current memory_target 4GiB for all OSDs? Thanks, k ___ ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an email to ceph-users-le...@ceph.

[ceph-users] Re: Email alerts from Ceph [EXT]

2021-03-18 Thread Konstantin Shalygin
Just use ceph-dash and check_ceph_dash [1] [1] https://github.com/Crapworks/check_ceph_dash k Sent from my iPhone > On 18 Mar 2021, at 12:02, Matthew Vernon wrote: > > I'm afraid we used our existing Nagios infrastructure for checking HEALTH > status, and have a script that runs daily to rep

[ceph-users] Re: Device class not deleted/set correctly

2021-03-25 Thread Konstantin Shalygin
Whether the device is rotational or not, taken from the kernel k Sent from my iPhone > On 25 Mar 2021, at 15:06, Nico Schottelius > wrote: > > The description I am somewhat missing is "set based on which criteria?" ___ ceph-users mailing list -- ceph-users@ceph.io To unsub

[ceph-users] Re: upgrade problem nautilus 14.2.15 -> 14.2.18? (Broken ceph!)

2021-03-25 Thread Konstantin Shalygin
Finally master is merged now k Sent from my iPhone > On 25 Mar 2021, at 23:09, Simon Oosthoek wrote: > > I'll wait a bit before upgrading the remaining nodes. I hope 14.2.19 will be > available quickly. ___ ceph-users mailing list -- ceph-users@cep

[ceph-users] Re: bug in ceph-volume create

2021-04-06 Thread Konstantin Shalygin
Philip, good suggestion! I have created a ticket for that [1] [1] https://tracker.ceph.com/issues/50163 K > On 5 Apr 2021, at 23:47, Philip Brown wrote: > > You guys might consider a feature request of doing some kind of check on long > device path names getting passed in, to see if the util sho

[ceph-users] Re: Revisit Large OMAP Objects

2021-04-14 Thread Konstantin Shalygin
Run reshard stale-instances rm and reshard your bucket by hand, or leave the dynamic resharding process to do this work k Sent from my iPhone > On 13 Apr 2021, at 19:33, dhils...@performair.com wrote: > > All; > > We run 2 Nautilus clusters, with RADOSGW replication (14.2.11 --> 14.2.16). > > Initial

[ceph-users] Re: Exporting CephFS using Samba preferred method

2021-04-14 Thread Konstantin Shalygin
Hi, Actually vfs_ceph should perform better, but this method will not work with other vfs modules, like recycle or audit, in one stack k Sent from my iPhone > On 14 Apr 2021, at 09:56, Martin Palma wrote: > > Hello, > > what is the currently preferred method, in terms of stability and >

[ceph-users] Re: _delete_some new onodes has appeared since PG removal started

2021-04-21 Thread Konstantin Shalygin
Dan, do you mean this issue [1]? I started to backfill new OSDs on 14.2.19: the pool has 2048 PGs with 7.14G objects... the avg per PG is 3486328 objects [1] https://tracker.ceph.com/issues/47044 k > On 21 Apr 2021, at 16:37, Dan van der Ster wrote:

[ceph-users] Re: _delete_some new onodes has appeared since PG removal started

2021-04-21 Thread Konstantin Shalygin
Nope, upmap is currently impossible on these clusters 😬 due to the client lib (the guys are working on an update now). ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME -166 10.94385 - 11 TiB 382 GiB 317 GiB 64 KiB 66 GiB 11 TiB 3.42 1.00 -

[ceph-users] Re: _delete_some new onodes has appeared since PG removal started

2021-04-21 Thread Konstantin Shalygin
Just for the record - I enabled this for all OSDs on these clusters k > On 21 Apr 2021, at 17:22, Igor Fedotov wrote: > > Curious if you had bluefs_buffered_io set to true when faced that? ___ ceph-users mailing list -- ceph-users@ceph.io To unsubscr

[ceph-users] Re: _delete_some new onodes has appeared since PG removal started

2021-04-21 Thread Konstantin Shalygin
Are these hdd or hybrid OSDs? How many objects per PG on average? k Sent from my iPhone > On 21 Apr 2021, at 17:44, Dan van der Ster wrote: > > Here's a tracker: https://tracker.ceph.com/issues/50466 > > bluefs_buffered_io is indeed enabled on this cluster, but I suspect it > doesn't help for this prec

[ceph-users] BlueFS.cc ceph_assert(bl.length() <= runway): protection against bluefs log file growth

2021-04-28 Thread Konstantin Shalygin
Hi, Protection against infinite BlueFS log growth was recently added [1]; I get an assert on 14.2.19: /build/ceph-14.2.19/src/os/bluestore/BlueFS.cc: 2404: FAILED ceph_assert(bl.length() <= runway) Then the OSD is dead. Is there a tracker (maybe one already exists)? Are logs of interest for this case? [1] https://

[ceph-users] Re: active+recovery_unfound+degraded in Pacific

2021-04-28 Thread Konstantin Shalygin
Hi, You should crush reweight this OSD (sde) to zero, and ceph will remap all PGs to other OSDs; after draining you may replace your drive k Sent from my iPhone > On 29 Apr 2021, at 06:00, Lomayani S. Laizer wrote: > > Any advice on this. Am stuck because one VM is not working now. Looks t
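
A sketch of the draining step, with a placeholder OSD id for the failing sde device:

    ceph osd crush reweight osd.17 0
    # watch backfill until the OSD is empty, then replace the drive
    ceph -s
    ceph osd df tree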

[ceph-users] Re: active+recovery_unfound+degraded in Pacific

2021-04-29 Thread Konstantin Shalygin
Not wrong, your drive has failed - the degradation is just a signal of lower-level issues k > On 29 Apr 2021, at 11:21, Lomayani S. Laizer wrote: > > My understanding of the unfound object was wrong. I thought it means the > object cant be found in all replicas _

[ceph-users] Re: cephfs mount problems with 5.11 kernel - not a ipv6 problem

2021-05-11 Thread Konstantin Shalygin
Hi Ilya, > On 3 May 2021, at 14:15, Ilya Dryomov wrote: > > I don't think empty directories matter at this point. You may not have > had 12 OSDs at any point in time, but the max_osd value appears to have > gotten bumped when you were replacing those disks. > > Note that max_osd being greater

[ceph-users] Re: cephfs mount problems with 5.11 kernel - not a ipv6 problem

2021-05-11 Thread Konstantin Shalygin
> On 11 May 2021, at 14:24, Ilya Dryomov wrote: > > No, as mentioned above max_osds being greater is not a problem per se. > Having max_osds set to 1 when you only have a few dozen is going to > waste a lot of memory and network bandwidth, but if it is just slightly > bigger it's not someth

[ceph-users] Re: bluefs_buffered_io turn to true

2021-05-13 Thread Konstantin Shalygin
Hi, This is not normal; it's something different I think, like a crush change on restart. This option will be enabled by default again in the next Nautilus, so you can use it now with 14.2.19-20 k Sent from my iPhone > On 14 May 2021, at 08:21, Szabo, Istvan (Agoda) > wrote: > > Hi, > >

[ceph-users] Re: [Suspicious newsletter] Re: bluefs_buffered_io turn to true

2021-05-14 Thread Konstantin Shalygin
I recommend upgrading at least to 12.2.13; for Luminous even .12 vs .13 is a significant difference in code. k > On 14 May 2021, at 09:22, Szabo, Istvan (Agoda) > wrote: > > It is quite an older cluster, luminous 12.2.8. ___ ceph-users mailing li

[ceph-users] Re: [Suspicious newsletter] Re: bluefs_buffered_io turn to true

2021-05-14 Thread Konstantin Shalygin
> On 14 May 2021, at 10:50, Szabo, Istvan (Agoda) > wrote: > > Is it also normal if this buffered_io turned on, it eats all the memory on the > system? Hmmm. > This is what this option actually does - it eats all free memory as cache for bluefs speedups k __

[ceph-users] Re: [Suspicious newsletter] Re: bluefs_buffered_io turn to true

2021-05-14 Thread Konstantin Shalygin
Nope, the kernel reserves enough memory to free under pressure, for example a 36 OSD, 0.5 TiB RAM host:
              total        used        free      shared  buff/cache   available
Mem:           502G        168G        2.9G         18M        331G        472G
Swap:          952M        248M        704M

[ceph-users] Re: [Suspicious newsletter] Re: bluefs_buffered_io turn to true

2021-05-14 Thread Konstantin Shalygin
It's enough, should be true now... k > On 14 May 2021, at 12:51, Szabo, Istvan (Agoda) > wrote: > > Did I do something wrong? > I set in the global config the bluefs option, and restarted ceph.target on > the osd node :/ ? > > Doe this need some special thing to apply?
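
A quick way to confirm a running OSD actually picked the value up (osd.0 is a placeholder; run it on the host that owns the admin socket):

    ceph config set osd bluefs_buffered_io true
    ceph daemon osd.0 config get bluefs_buffered_io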

[ceph-users] Re: [Suspicious newsletter] Re: bluefs_buffered_io turn to true

2021-05-14 Thread Konstantin Shalygin
I suggest looking into the vm.min_free_kbytes kernel option and doubling it k > On 14 May 2021, at 13:45, Szabo, Istvan (Agoda) > wrote: > > Is there anything that should be set just to be sure oom kill not happen? Or > nothing? ___ ceph-users

[ceph-users] Re: [Suspicious newsletter] Re: bluefs_buffered_io turn to true

2021-05-14 Thread Konstantin Shalygin
> On 14 May 2021, at 14:20, Szabo, Istvan (Agoda) > wrote: > > Howmuch is yours? Mine is vm.min_free_kbytes = 90112. I use 135168 > On 14 May 2021, at 14:31, Szabo, Istvan (Agoda) > wrote: > > Yup, I just saw, should have 3GB :/ I will wait until the system goes back to > normal and will
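
A sketch of that tuning (the value below is only an example following the "double it" advice above, not a recommendation):

    sysctl vm.min_free_kbytes
    sysctl -w vm.min_free_kbytes=180224
    echo 'vm.min_free_kbytes = 180224' > /etc/sysctl.d/90-ceph-min-free.conf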

[ceph-users] Re: BlueFS spillover detected - 14.2.16

2021-05-19 Thread Konstantin Shalygin
Hi, Toby On 19 May 2021, at 15:24, Toby Darling wrote: > > In the last couple of weeks we've been getting BlueFS spillover warnings on > multiple (>10) osds, eg > > BLUEFS_SPILLOVER BlueFS spillover detected on 1 OSD(s) > osd.327 spilled over 58 MiB metadata from 'db' device (30 GiB used o

[ceph-users] Re: ceph df: pool stored vs bytes_used -- raw or not?

2021-05-19 Thread Konstantin Shalygin
Dan, Igor, It seems this wasn't backported? I get stored == used on a Luminous -> Nautilus 14.2.21 upgrade. What is the solution? Find the OSDs with zero bytes reported and drain -> redeploy them? Thanks, k Sent from my iPhone ___ ceph-users mailing list -- ceph-users@ceph.io To unsubs

[ceph-users] Re: ceph df: pool stored vs bytes_used -- raw or not?

2021-05-21 Thread Konstantin Shalygin
> On 20 May 2021, at 21:09, Igor Fedotov wrote: > > Perhaps you're facing a different issue, could you please share "ceph osd > tree" output? Here: https://pastebin.com/bic4v5Xy Thanks, k ___ ceph-users mailing list

[ceph-users] Re: ceph osd df size shows wrong, smaller number

2021-05-21 Thread Konstantin Shalygin
> On 21 May 2021, at 12:17, Rok Jaklič wrote: > > There isnt any manual method for bluestore. Your block is 107374182400, so the report is correct. For bluestore it is better to use the "ceph-volume lvm batch /dev/sdb" command k ___ ceph-users mailing list --
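
For example (device names are placeholders):

    # one OSD per listed device
    ceph-volume lvm batch /dev/sdb /dev/sdc /dev/sdd
    # or a single OSD explicitly
    ceph-volume lvm create --data /dev/sdb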

[ceph-users] Re: How to organize data in S3

2021-05-23 Thread Konstantin Shalygin
By default, when an account is created, the max buckets for the account is 1000. You can change this default, and at any time set the max buckets setting for any already created account k Sent from my iPhone > On 24 May 2021, at 08:52, Michal Strnad wrote: > > Thank you. So we can create millions of buckets as
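
A sketch with a placeholder uid:

    radosgw-admin user modify --uid=johndoe --max-buckets=5000
    radosgw-admin user info --uid=johndoe | grep max_buckets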

[ceph-users] Re: summarized radosgw size_kb_actual vs pool stored value doesn't add up

2021-05-25 Thread Konstantin Shalygin
Hi, > On 25 May 2021, at 10:23, Boris Behrens wrote: > > I am still searching for a reason why these two values differ so much. > > I am currently deleting a giant amount of orphan objects (43mio, most > of them under 64kb), but the difference get larger instead of smaller. When user trough AP

[ceph-users] Re: Turning on "compression_algorithm" old pool with 500TB usage

2021-06-06 Thread Konstantin Shalygin
The same. You need to rewrite all your data k Sent from my iPhone > On 4 Jun 2021, at 15:08, mhnx wrote: > > Hello. I have a erasure pool and I didn't turn on compression at the > beginning. > Now I'm writing new type of very small data and overhead is becoming an issue. > I'm thinking to t

[ceph-users] Re: ceph df: pool stored vs bytes_used -- raw or not?

2021-06-08 Thread Konstantin Shalygin
Stored==used was resolved for this cluster. Actually the problem is what you discovered in a previous year: zeros. Filestore lacks the META counter - it is always zero. When I purged the last drained OSD from the cluster, the statistics returned to normal immediately Thanks, k > On 20 May 2021, at 21:22, Dan van

[ceph-users] Re: OSD bootstrap time

2021-06-08 Thread Konstantin Shalygin
Hi, Do you mean freshly deployed OSDs or old, just restarted OSDs? Thanks, k Sent from my iPhone > On 8 Jun 2021, at 23:30, Jan-Philipp Litza wrote: > > recently I'm noticing that starting OSDs for the first time takes ages > (like, more than an hour) before they are even picked up by the mo

[ceph-users] Re: OSD bootstrap time

2021-06-09 Thread Konstantin Shalygin
This is the new min_alloc_size for bluestore. A 4K mkfs requires more time and the process is single-threaded, I think. It's normal k > On 9 Jun 2021, at 14:21, Jan-Philipp Litza wrote: > > I mean freshly deployed OSDs. Restarted OSDs don't exhibit that behavior.

[ceph-users] Re: ceph df: pool stored vs bytes_used -- raw or not?

2021-06-15 Thread Konstantin Shalygin
Filed https://tracker.ceph.com/issues/51223 k > On 9 Jun 2021, at 13:20, Igor Fedotov wrote: > > Should we fire another ticket for that? ___ ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an email to ceph-users-le...@ceph.io

[ceph-users] Re: Ceph monitor won't start after Ubuntu update

2021-06-16 Thread Konstantin Shalygin
Hi, > On 16 Jun 2021, at 01:33, Petr wrote: > > I've upgraded my Ubuntu server from 18.04.5 LTS to Ubuntu 20.04.2 LTS via > 'do-release-upgrade', > during that process ceph packages were upgraded from Luminous to Octopus and > now ceph-mon daemon(I have only one) won't start, log error is: > "

[ceph-users] Re: ceph osd df return null

2021-06-16 Thread Konstantin Shalygin
Perhaps these OSDs are offline / out? Please upload your `ceph osd df tree` & `ceph osd tree` to pastebin Thanks, k > On 16 Jun 2021, at 10:43, julien lenseigne > wrote: > > when i do ceph osd df, > > some osd returns null size. For example : > > 0 hdd 7.27699 1.0 0B 0B

[ceph-users] Re: Likely date for Pacific backport for RGW fix?

2021-06-16 Thread Konstantin Shalygin
Hi, Cory has marked the backport PR as ready; after QA the fix should be merged k Sent from my iPhone > On 16 Jun 2021, at 18:05, Chris Palmer wrote: > > The only thing that has bitten us is https://tracker.ceph.com/issues/50556 > which prevents a multipart

[ceph-users] Re: Ceph monitor won't start after Ubuntu update

2021-06-16 Thread Konstantin Shalygin
You can just use your mon db data on another (bionic?) Ubuntu to make the upgrade. Then return the upgraded data to your focal Ubuntu. Cheers, k Sent from my iPhone > On 16 Jun 2021, at 19:39, Petr wrote: > > I would like to, but there is no Nautilus packages for Ubuntu 20(focal). _

[ceph-users] Re: how to set rgw parameters in Pacific

2021-06-19 Thread Konstantin Shalygin
Hi, If you need to set a setting only for one client, use the client prefix; if for all clients - use global Cheers, k Sent from my iPhone > On 19 Jun 2021, at 21:43, Adrian Nicolae wrote: > > I have some doubts regarding the best way to change some rgw parameters in > Pacific. > > Let's say I
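
A sketch of the two scopes, using rgw_enable_usage_log as the example option and a hypothetical client name:

    # only for one RGW instance
    ceph config set client.rgw.gw1 rgw_enable_usage_log true
    # for all clients / daemons
    ceph config set global rgw_enable_usage_log true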

[ceph-users] Re: HDD <-> OSDs

2021-06-22 Thread Konstantin Shalygin
140 LVs actually, in the hybrid OSD case Cheers, k Sent from my iPhone > On 22 Jun 2021, at 12:56, Thomas Roth wrote: > > I was going to try cephfs on ~10 servers with 70 HDD each. That would make > each system having to deal with 70 OSDs, on 70 LVs?

[ceph-users] Re: How can I check my rgw quota ? [EXT]

2021-06-23 Thread Konstantin Shalygin
Or you can use radosgw_usage_exporter [1] and provide some graphs to end users [1] https://github.com/blemmenes/radosgw_usage_exporter k Sent from my iPhone > On 23 Jun 2021, at 11:59, Matthew Vernon wrote: > > > I think you can't via S3; we collect these data and publish them out-of-band

[ceph-users] Re: radosgw user "check_on_raw" setting

2021-06-28 Thread Konstantin Shalygin
Hi, I think it is not possible to do this with the CLI, like for the placement configs. Actually this is a setting of the period: "period_config": { "bucket_quota": { "enabled": false, "check_on_raw": false, "max_size": -1, "max_size_kb": 0,

[ceph-users] Re: RocksDB degradation / manual compaction vs. snaptrim operations choking Ceph to a halt

2021-07-07 Thread Konstantin Shalygin
If your PGs do not serve millions of objects per PG - this is not your problem... k Sent from my iPhone > On 7 Jul 2021, at 11:32, Christian Rohmann > wrote: > > I know improvements in this regard are actively worked on for pg removal, i.e. > > * https://tracker.ceph.com/issues/47174 > ** ht

[ceph-users] Re: list-type=2 requests

2021-07-08 Thread Konstantin Shalygin
This has been the default query in aws-sdk for a couple of years. What is your Ceph version? k > On 8 Jul 2021, at 11:23, Szabo, Istvan (Agoda) wrote: > > Hi, > > Is there anybody know about list-type=2 request? > GET /bucket?list-type=2&max-keys=2 > > We faced yesterday the 2nd big objectstore clust

[ceph-users] Re: RGW Dedicated clusters vs Shared (RBD, RGW) clusters

2021-07-08 Thread Konstantin Shalygin
What will your object size be? 70T RAW is quite small; I think it is better to add hardware to your RBD cluster and run the object service there k Sent from my iPhone > On 8 Jul 2021, at 14:17, gustavo panizzo wrote: > > Hello > > I have some experience with RBD clusters (for use with KVM/libvirt)

[ceph-users] Re: RGW Dedicated clusters vs Shared (RBD, RGW) clusters

2021-07-09 Thread Konstantin Shalygin
> On 9 Jul 2021, at 11:01, gustavo panizzo wrote: > > at the beginning the object size will be 50M to 1G but after some time > anything goes Then you don't need to do anything, because the default min alloc size is 4K, and it can't be lower. In previous releases it was needed if your object size was mostly lower th

[ceph-users] Re: RGW performance as a Veeam capacity tier

2021-07-10 Thread Konstantin Shalygin
Veeam normally produces 2-4Gbit/s to S3 in our case k Sent from my iPhone > On 10 Jul 2021, at 08:36, Nathan Fish wrote: > > No, that's pretty slow, you should get at least 10x that for > sequential writes. Sounds like Veeam is doing a lot of sync random > writes. If you are able to add a bi

[ceph-users] Re: resharding and s3cmd empty listing

2021-07-13 Thread Konstantin Shalygin
Hi, What is your Ceph version? k Sent from my iPhone > On 12 Jul 2021, at 20:41, Jean-Sebastien Landry > wrote: > > Hi everyone, something strange here with bucket resharding vs. bucket > listing. > > I have a bucket with about 1M objects in it, I increased the bucket quota > from 1M to

[ceph-users] Re: How to size nvme or optane for index pool?

2021-07-14 Thread Konstantin Shalygin
What do you mean? You can check pool usage via the 'ceph df detail' output Sent from my iPhone > On 15 Jul 2021, at 07:53, Szabo, Istvan (Agoda) > wrote: > > How can I know which size of the nvme drive needed for my index pool? At the > moment I'm using 6x1.92TB NVME (overkill) but I have no idea ho

[ceph-users] Re: Files listed in radosgw BI but is not available in ceph

2021-07-17 Thread Konstantin Shalygin
Boris, what is your Ceph version? k > On 17 Jul 2021, at 11:04, Boris Behrens wrote: > > I really need help with this issue. ___ ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an email to ceph-users-le...@ceph.io

[ceph-users] Re: Procedure for changing IP and domain name of all nodes of a cluster

2021-07-21 Thread Konstantin Shalygin
Hi, > On 21 Jul 2021, at 10:53, Burkhard Linke > wrote: > > One client with special needs is openstack cinder. The database entries > contain the mon list for volumes Another question: do you know where this list is saved? I mean, how can I see the current records via a cinder command? Thanks,

[ceph-users] Re: ceph-Dokan on windows 10 not working after upgrade to pacific

2021-07-21 Thread Konstantin Shalygin
Hi Lucian, > On 29 Jun 2021, at 17:02, Lucian Petrut > wrote: > > It’s a compatibility issue, we’ll have to update the Windows Pacific build. > > Sorry for the delayed reply, hundreds of Ceph ML mails ended up in my spam > box. Ironically, I’ll have to thank Office 365 for that :). Can you p

[ceph-users] Re: unable to map device with krbd on el7 with ceph nautilus

2021-07-23 Thread Konstantin Shalygin
EL7 client is still compatible with Nautilus Sent from my iPhone > On 24 Jul 2021, at 00:58, cek+c...@deepunix.net wrote: > > Is that because the kernel module is too old? ___ ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an email t

[ceph-users] Re: How to set retention on a bucket?

2021-07-26 Thread Konstantin Shalygin
> On 26 Jul 2021, at 08:05, Szabo, Istvan (Agoda) > wrote: > > Haven't really found how to set the retention on a s3 bucket for a specific > day. Is there any ceph document about it? It is not possible to set retention on specific days, only at +days from the putObject day. LC policy is highly doc
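
A sketch of such an LC rule applied with the aws CLI against RGW (endpoint, bucket and the 30-day expiry are placeholders):

    aws --endpoint-url https://rgw.example.com s3api put-bucket-lifecycle-configuration \
      --bucket mybucket \
      --lifecycle-configuration '{"Rules":[{"ID":"expire-30d","Status":"Enabled","Filter":{"Prefix":""},"Expiration":{"Days":30}}]}'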

[ceph-users] Re: Handling out-of-balance OSD?

2021-07-28 Thread Konstantin Shalygin
ceph pg ls-by-osd k Sent from my iPhone > On 28 Jul 2021, at 12:46, Manuel Holtgrewe wrote: > > How can I find out which pgs are actually on osd.0? ___ ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an email to ceph-users-le...@ce
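
For example, for osd.0:

    ceph pg ls-by-osd 0
    ceph pg ls-by-osd 0 | wc -l   # rough count of PGs hosted on osd.0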

[ceph-users] Re: rbd object mapping

2021-08-07 Thread Konstantin Shalygin
The object map shows where an object with any given name will be placed in the defined pool with your crush map, and which OSDs will serve this PG. You can type anything as the object name - and get the future placement, or the placement of an existing object - this is how the algorithm works. 12800 means that your 100GiB
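
A worked sketch of that arithmetic, assuming an 8 MiB object size (which matches the 12800 figure), plus the commands to look up an actual mapping (pool, image and object-name prefix are placeholders):

    # 100 GiB image / 8 MiB object size = 12800 objects
    # (with the default 4 MiB object size it would be 25600)
    rbd info rbd/vm-disk-1        # shows object size and block_name_prefix
    ceph osd map rbd rbd_data.ab12cd34ef56.0000000000000000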
