Hi,
What was the error it threw? Did you intentionally set it up for HTTP? If
you're not using an L7 load balancer, you can still configure a reverse
proxy with HTTPS in both SSL passthrough and SSL termination modes, so no
need to turn HTTPS off.
By default the Ceph Dashboard runs with HTTPS (8443).
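Just as a sketch of the SSL-termination variant (nginx syntax; the hostname,
certificate paths and mgr address below are placeholders, and I'm assuming
the dashboard is still on its default 8443):

    server {
        listen 443 ssl;
        server_name dashboard.example.com;                  # placeholder hostname

        ssl_certificate     /etc/ssl/certs/dashboard.crt;   # your own cert/key
        ssl_certificate_key /etc/ssl/private/dashboard.key;

        location / {
            # terminate TLS here, then re-encrypt to the dashboard's own HTTPS
            proxy_pass https://mgr-host:8443;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-Proto https;
        }
    }

For passthrough you would instead proxy at the TCP level (an nginx "stream"
block or HAProxy in TCP mode) and keep the dashboard's own certificate.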
On Mon, 15 Nov 2021 at 10:18, MERZOUKI, HAMID wrote:
> Thanks for your answers Janne Johansson,
> "I'm sure you can if you do it even more manually with ceph-volume, but there
> should seldom be a need to"
> Why do you think "there should seldom be a need to" ?
I meant this as a response to how
Hi,
it's not entirely clear what your setup looks like. Are you trying to
set up multiple RGW containers on the same host(s) to serve multiple
realms, or do you have multiple RGWs for that?
You can add a second realm with a spec file or via the CLI (which you
already did). If you want to create mu
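For example, a second-realm RGW spec could look roughly like this (realm,
zone, hosts and port are placeholders for your own values):

    service_type: rgw
    service_id: realm2.zone2
    placement:
      hosts:
        - host1
        - host2
    spec:
      rgw_realm: realm2
      rgw_zone: zone2
      rgw_frontend_port: 8080   # pick one that doesn't clash with the existing RGW

applied with something like "ceph orch apply -i rgw-realm2.yaml".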
Hi Ernesto. Thanks sooo much for the reply. I am actually new to ceph and
you are right. I will try that out right now. But please, I wanted to know
if using a reverse proxy to map the Ceph Dashboard to a domain name is
the way to go?
Lastly, I set up my subdomain using Cloudflare. When I ru
Thank you!
It is hard for me to find that particular model. Kingston DC1000B is
readily available and not very expensive. Would this work well?
https://www.kingston.com/us/ssd/dc1000b-data-center-boot-ssd
It is marketed as a boot disk. Would it be OK for a home lab?
Regards,
Varun Priolkar
On
Hi Varun,
I'm not an expert in SSD drives, but since you wish to build a home lab:
ceph OSDs just need a "block device" to set up bluestore on.
So any SSD that's recognized by your system should work fine.
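For example, with cephadm and assuming the SSD shows up as /dev/sdb, adding
it as an OSD is just:

    # consume the whole device for a new bluestore OSD on that host
    ceph orch daemon add osd myhost:/dev/sdb

(or "ceph-volume lvm create --data /dev/sdb" if you prefer doing it by hand).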
On Mon, Nov 15, 2021 at 3:34 PM Varun Priolkar wrote:
> Thank you!
>
> It is har
You can also use consumer drives, considering that it is a home lab.
Otherwise, try to find a Seagate Nytro XM1441 or XM1440.
Mario
On Mon, 15 Nov 2021 at 14:59, Eneko Lacunza wrote:
> Hi Varun,
>
> That Kingston DC grade model should work (well enough at least for a
> home lab), it has
Hello,
Is anybody else hitting the ceph_assert(is_primary()) in
PrimaryLogPG::on_local_recover [1] repeatedly when upgrading?
I've been hit with this multiple times now on Octopus and it's just very
annoying, both on 15.2.11 and 15.2.15.
Been trying to collect as much information as possible over
Hi Innocent,
Yes, a reverse proxy should work and in general it's not a bad idea when
you're exposing Ceph Dashboard to a public network. You'll also have to
manually update the "GRAFANA_FRONTEND_API_URL" option ("ceph dashboard
set-grafana-frontend-api-url ") with the public facing URL (instead o
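For example (the URL here is only a placeholder for whatever your proxy
exposes):

    ceph dashboard set-grafana-frontend-api-url https://ceph.example.com/grafana
    # verify it took effect
    ceph dashboard get-grafana-frontend-api-url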
Hi Luis,
On Mon, Nov 15, 2021 at 4:57 AM Luis Domingues wrote:
>
> Hi,
>
> We are currently testing the mclock scheduler in a ceph Pacific
> cluster. We did not test it heavily, but at first glance it looks good on our
> installation. Probably better than wpq. But we still have a few qu
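For anyone following along, a quick way to check which scheduler the OSDs
are actually running and which profile is active (option names as of
Pacific, verify against your version):

    # wpq or mclock_scheduler
    ceph config get osd osd_op_queue
    # e.g. high_client_ops, balanced or high_recovery_ops
    ceph config get osd osd_mclock_profile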
Hi,
Compaction can block reads, but on the write path you should be able to
absorb a certain amount of writes via the WAL before rocksdb starts
throttling writes. The more (and larger) WAL buffers you have, the more
writes you can absorb, but bigger buffers also take more CPU to keep in
sorte
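If you want to see what your OSDs are using right now, the buffer settings
live inside the rocksdb options string (exact defaults vary by release):

    # write_buffer_size and max_write_buffer_number control the size and
    # number of memtables/WAL buffers rocksdb keeps before flushing
    ceph config get osd bluestore_rocksdb_options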
Awesome, thanks Ernesto! It's working now.
On Mon, 15 Nov 2021, 17:04 Ernesto Puerta wrote:
> Hi Innocent,
>
> Yes, a reverse proxy should work and in general it's not a bad idea when
> you're exposing Ceph Dashboard to a public network. You'll also have to
> manually update the "GRAFANA_FRONTE
Hi everyone,
This event is happening on November 18, 2021, 15:00-16:00 UTC - this
is an hour later than what I had sent in my earlier email (I hadn't
accounted for the daylight saving time change, sorry!); the calendar invite
reflects the same.
Thanks,
Neha
On Thu, Oct 28, 2021 at 11:53 AM Neha Ojha wr
I upgraded all the OSDs + mons to Pacific 16.2.6
All PGs have been active+clean for the last few days, but memory is still quite
high:
"osd_pglog": {
"items": 35066835,
"bytes": 3663079160 (3.6 GB)
},
"buffer_anon": {
Okay, I traced one slow op through the logs, and the problem was that the
PG was laggy. That happened because of osd.122, the one you stopped, which
was marked down in the OSDMap but *not* dead. It looks like that happened
because the OSD took the 'clean shutdown' path instead of the fast stop.
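If you want to check whether your OSDs are set up for the fast stop path,
the relevant knob should be (name as of Octopus/Pacific, verify on your
version):

    # true = the OSD exits immediately on stop instead of the slower clean shutdown
    ceph config get osd osd_fast_shutdown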
Hi,
"Couldn't init storage provider (RADOS)"
I usually see this when my rgw config is wrong. Can you share your rgw
spec(s)?
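If it helps, the spec(s) cephadm currently has stored for the RGW service
can be dumped with something like:

    # export the stored service specification(s) for rgw as yaml
    ceph orch ls rgw --export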
Quoting J-P Methot:
After searching through the logs and digging into how cephadm works,
I've figured out that when cephadm tries to create the new systemd
servic