[ceph-users] Re: Recommended settings for PostgreSQL

2020-10-19 Thread Marc Roos
In the past I have seen some good results (benchmarks & latencies) for MySQL and PostgreSQL. However, I've always used a 4MB object size. Maybe I can get much better performance with a smaller object size; haven't tried it, actually. Did you tune MySQL / Postgres for this setup? Did you have a default ce
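For context: the RBD object size is fixed when an image is created, so trying a smaller size means creating a fresh image. A minimal sketch, with hypothetical pool and image names:

    rbd create --size 100G --object-size 1M rbd-pool/pg-data   # default object size is 4M
    rbd info rbd-pool/pg-data                                  # confirms the object size / order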

[ceph-users] RGW with HAProxy

2020-10-19 Thread Seena Fallah
Hi, when I use haproxy in keep-alive mode in front of the RGWs, haproxy returns many responses like this! Is there any problem with keep-alive mode in RGW? Using Nautilus 14.2.9 with the Beast frontend.
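For reference, a minimal HAProxy sketch for RGW with keep-alive enabled, assuming the usual defaults section (client/server/connect timeouts) is already in place; hostnames, addresses, and ports are placeholders:

    frontend rgw_front
        bind *:80
        mode http
        default_backend rgw_back

    backend rgw_back
        mode http
        option http-keep-alive
        timeout http-keep-alive 10s
        balance roundrobin
        server rgw1 192.168.0.11:8080 check
        server rgw2 192.168.0.12:8080 check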

[ceph-users] Re: Ceph OIDC Integration

2020-10-19 Thread technical
Dear Pritha, thanks a lot for your feedback, and apologies for missing your comment about the backporting. Would you happen to have a rough estimate for the next Octopus release? On another note, on the same subject, would you be able to give us some feedback on how the users will be created i

[ceph-users] Re: Recommended settings for PostgreSQL

2020-10-19 Thread Gencer W . Genç
Yes, I had to tune some settings in PostgreSQL, especially: synchronous_commit = off. I have default RBD settings. Do you have any recommendations? Thanks, Gencer. On 19.10.2020 12:49:51, Marc Roos wrote: > In the past I have seen some good results (benchmarks & latencies) for MySQL and PostgreS
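For readers following along, a hedged sketch of the kind of postgresql.conf settings people tune for RBD-backed databases; the values are illustrative assumptions, not a recommendation from this thread (only synchronous_commit = off was actually mentioned):

    # postgresql.conf -- illustrative values only
    synchronous_commit = off            # as above: trades durability for commit latency
    wal_compression = on                # fewer WAL bytes per commit over the network
    checkpoint_completion_target = 0.9  # spread checkpoint I/O out
    effective_io_concurrency = 16       # RBD handles concurrent reads well
    random_page_cost = 1.1              # flash/network-backed storage, not local spinning disk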

[ceph-users] Re: multiple OSD crash, unfound objects

2020-10-19 Thread Michael Thomas
I left osd.41 out over the weekend, and put it back in this morning. After the recovery finished, here are the results of the ops queries: ceph daemon osd.41 ops: https://pastebin.com/keYBMVbH ceph daemon osd.41 dump_historic_slow_ops https://pastebin.com/axbZNh7M Yes, the OSD was still out

[ceph-users] Re: multiple OSD crash, unfound objects

2020-10-19 Thread Michael Thomas
Hi Frank, I'll give both of these a try and let you know what happens. Thanks again for your help, --Mike On 10/16/20 12:35 PM, Frank Schilder wrote: Dear Michael, this is a bit of a nut. I can't see anything obvious. I have two hypotheses that you might consider testing. 1) Problem with 1

[ceph-users] OSD host count affecting available pool size?

2020-10-19 Thread Dallas Jones
Hi, Ceph brain trust: I'm still trying to wrap my head around some capacity planning for Ceph, and I can't find a definitive answer to this question in the docs (at least one that penetrates my mental haze)... Does the OSD host count affect the total available pool size? My cluster consists of th

[ceph-users] Re: OSD host count affecting available pool size?

2020-10-19 Thread Dallas Jones
Ah, I had overlooked this sentence in the docs before: "When you deploy OSDs they are automatically placed within the CRUSH map under a host node named with the hostname for the host they are running on." This, combined with the default CRUSH failure domain, ensures that replicas or erasure code shar
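The failure-domain behaviour referenced above comes from the default replicated CRUSH rule, which picks one OSD per host. Roughly what the decompiled rule looks like (a sketch; inspect your own with ceph osd crush rule dump):

    rule replicated_rule {
        id 0
        type replicated
        step take default
        step chooseleaf firstn 0 type host   # one replica per host
        step emit
    }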

[ceph-users] Ceph Octopus

2020-10-19 Thread Amudhan P
Hi, I have installed a Ceph Octopus cluster using cephadm with a single network. Now I want to add a second network and configure it as the cluster address. How do I configure Ceph to use the second network as the cluster network? Amudhan
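One possible approach (a sketch, not verified against this cluster; the subnet and service name are placeholders) is to set cluster_network centrally and restart the OSDs so they pick it up:

    ceph config set global cluster_network 192.168.2.0/24
    ceph config get osd cluster_network        # confirm it took effect
    ceph orch restart osd.<osd-service-name>   # OSDs only bind the new network after a restart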

[ceph-users] Re: OSD host count affecting available pool size?

2020-10-19 Thread Eugen Block
Hi, I'm not sure I understand what your interpretation is. If you have 30 OSDs, each with 1 TB, you'll end up with 30 TB of available (raw) space, no matter whether those OSDs are spread across 3 or 10 hosts. The CRUSH rules you define determine how many replicas are going to be distributed across your O
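A rough worked example of that point (ignoring full ratios and imbalance): host count changes failure isolation, not capacity.

    30 OSDs x 1 TB          = 30 TB raw, whether on 3 hosts or 10
    replicated, size = 3   -> ~10 TB usable (30 / 3)
    erasure coded 4+2      -> ~20 TB usable (30 x 4/6)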

[ceph-users] Bucket notification is working strange

2020-10-19 Thread Krasaev
Hi everyone, I asked the same question on Stack Overflow, but I'll repeat it here. I configured a bucket notification using the bucket owner's credentials, and when the owner performs actions I can see new events at the configured endpoint (Kafka, actually). However, when I try to perform actions in the bucket, but with anot
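For context, bucket notifications in RGW are wired up via the SNS-compatible topic API plus an S3 notification configuration, roughly like the sketch below (endpoint, topic, bucket, and ARN values are placeholders; check the RGW bucket-notification docs for the exact attribute names):

    aws --endpoint-url http://rgw.example.com sns create-topic --name kafka-topic \
        --attributes push-endpoint=kafka://kafka-host:9092,kafka-ack-level=broker

    aws --endpoint-url http://rgw.example.com s3api put-bucket-notification-configuration \
        --bucket my-bucket --notification-configuration \
        '{"TopicConfigurations":[{"Id":"n1","TopicArn":"arn:aws:sns:default::kafka-topic","Events":["s3:ObjectCreated:*"]}]}'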

[ceph-users] Re: Mon DB compaction MON_DISK_BIG

2020-10-19 Thread Anthony D'Atri
I hope you restarted those mons sequentially, waiting between each for the quorum to return. Is there any recovery or pg autoscaling going on? Are all OSDs up/in, i.e., are the three numbers returned by `ceph osd stat` the same? — aad > On Oct 19, 2020, at 7:05 PM, Szabo, Istvan (Agoda) wro
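A quick way to check all three points (a sketch; jq is optional):

    ceph osd stat                            # e.g. "30 osds: 30 up, 30 in" -- the numbers should match
    ceph quorum_status | jq .quorum_names    # all mons back in quorum
    ceph -s                                  # shows any ongoing recovery / autoscaler activity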

[ceph-users] Re: Mon DB compaction MON_DISK_BIG

2020-10-19 Thread Anthony D'Atri
> Hi, yeah, sequentially, and I waited for it to finish. It looks like it is still doing something in the background, because now it is 9.5 GB even though it says compaction is done. I think the ceph tell compact initiated a harder compaction, so I'm not sure how far it will go down, but it looks promising. When
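For anyone following along, the compaction being discussed can be triggered per monitor, along these lines (mon name follows the pattern used in this thread; treat it as a sketch):

    ceph tell mon.monserver-2c01 compact            # online compaction, one mon at a time
    ceph config set mon mon_compact_on_start true   # optional: compact at every mon restart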

[ceph-users] Re: Recommended settings for PostgreSQL

2020-10-19 Thread Gencer W . Genç
Marc Roos wrote: > In the past I have seen some good results (benchmarks & latencies) for MySQL and PostgreSQL. However, I've always used a 4MB object size. Maybe I can get much better performance with a smaller object size; haven't tried it, actually. > Did you tune MySQL / Postgres for thi

[ceph-users] Re: Recommended settings for PostgreSQL

2020-10-19 Thread Brian Topping
Another option is to let PostgreSQL do the replication with local storage. There are great reasons for Ceph, but databases optimize for this kind of thing extremely well. With replication in hand, run snapshots to RADOS buckets for long-term storage. > On Oct 17, 2020, at 7:28 AM, Gencer W. Gen
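As an illustration of that suggestion, a base backup can be streamed straight into an RGW bucket over the S3 API; a hedged sketch, with the endpoint and bucket names as placeholders:

    pg_basebackup -D - -F tar -X fetch | gzip | \
        aws --endpoint-url http://rgw.example.com s3 cp - s3://pg-backups/base-$(date +%F).tar.gz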

[ceph-users] Re: Recommended settings for PostgreSQL

2020-10-19 Thread Dave Hall
Another path that we have been investigating is to use some NVMe on the database machine as a cache (bcache, cachefs, etc.). Several TB of U.2 drives in a striped LVM should enhance performance for 'hot' data and cover for the issues of storing a large DB in Ceph. Note that we haven't tried thi
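For the bcache variant, the setup is roughly the following (device names are placeholders; entirely untested here, as the author notes):

    make-bcache -C /dev/nvme0n1 -B /dev/sdb               # NVMe as cache, large device as backing
    mkfs.xfs /dev/bcache0                                 # the combined device appears as bcache0
    echo writeback > /sys/block/bcache0/bcache/cache_mode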

[ceph-users] Mon DB compaction MON_DISK_BIG

2020-10-19 Thread Szabo, Istvan (Agoda)
Hi, I've received a warning this morning:

    HEALTH_WARN mons monserver-2c01,monserver-2c02,monserver-2c03 are using a lot of disk space
    MON_DISK_BIG mons monserver-2c01,monserver-2c02,monserver-2c03 are using a lot of disk space
        mon.monserver-2c01 is 15.3GiB >= mon_data_size_warn (15GiB)
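To see how big the mon store actually is and which threshold triggers the warning (a sketch; the path differs under cephadm, where the store lives under /var/lib/ceph/<fsid>/mon.<name>/):

    du -sh /var/lib/ceph/mon/ceph-$(hostname -s)/store.db
    ceph config get mon mon_data_size_warn       # default 15GiB, matching the warning above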

[ceph-users] Re: Mon DB compaction MON_DISK_BIG

2020-10-19 Thread Szabo, Istvan (Agoda)
Hi, yeah, sequentially, and I waited for it to finish. It looks like it is still doing something in the background, because now it is 9.5 GB even though it says compaction is done. I think the ceph tell compact initiated a harder compaction, so I'm not sure how far it will go down, but it looks promising. When I sent the email

[ceph-users] Re: Mon DB compaction MON_DISK_BIG

2020-10-19 Thread Szabo, Istvan (Agoda)
Okay, thank you very much.