Hi,
You should erase any partitions or LVM volume groups on the disks and restart
the OSD hosts so that Ceph can detect the drives. I usually just run 'dd
if=/dev/zero of=/dev/sdX bs=1M count=1024' and then reboot the host to make
sure it is definitely clean. Alternatively, you can zap the
drives, or you
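If you go the zap route, something like this should do it (assuming the device
is /dev/sdX; both commands are destructive):
ceph-volume lvm zap /dev/sdX --destroy
or, to only wipe the filesystem/partition signatures:
wipefs -a /dev/sdX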
Hi Alex,
Switch the autoscaler to the 'scale-up' profile; it will keep PGs at the
minimum and increase them as required. The default profile is 'scale-down'.
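If I remember the Pacific syntax correctly, the switch is (note it applies
cluster-wide despite the 'pool set' wording):
ceph osd pool set autoscale-profile scale-up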
Regards,
Yury.
On Tue, Nov 2, 2021 at 3:31 AM Alex Petty wrote:
> Hello,
>
> I’m evaluating Ceph as a storage option, using ceph version 16.2.6,
> Pacific st
> e failure domain.
>
> From my experience, if a host (or even a single OSD) is temporarily down,
> you don't want to recover.
> It will generate load during recovery, and again once the host is back in
> the cluster, to move PGs back to their original OSDs.
>
> -
> Etienne Menguy
> etienne.men...@cro
Hi everyone,
I have a Ceph cluster with 3 MON/MGR/MDS nodes and 3 OSD nodes, each hosting
two OSDs (2 HDDs, 1 OSD per HDD). My pools are configured with replica x3,
and my osd_pool_default_size is set to 2. So I have 6 OSDs in total and 3
OSD hosts.
My CRUSH map is plain simple - root, then 3 host
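As a side note, the per-pool replica count and the cluster default can be
checked like this ('<pool>' is a placeholder for the real pool name):
ceph osd pool get <pool> size
ceph config get mon osd_pool_default_size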
You can downgrade any Ceph packages if you want to; just specify the version
you'd like to go to.
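For example, on Ubuntu/Debian something along these lines works ('<version>'
stands for the exact build string apt reports, which I'm leaving out here):
apt list -a ceph-osd
apt install ceph-osd=<version>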
On Wed, Oct 27, 2021 at 12:36 AM Boris Behrens wrote:
> Hi,
> I just added new storage to our S3 cluster and saw that Ubuntu didn't
> prioritize the Nautilus package over the Octopus package.
>
> Now
> Thanks for the answer.
>
> For now, my pool where I keep the VM disks has:
>
> # of PGs: 510
>
> Optimal # of PGs is 256
>
> Can I change the PGs from 510 to 256?
>
>
> 26.10.2021 12:21, Yury Kirsanov wrote:
> > Yes, you can, it will rescale PGs same way a
Yes, you can; it will rescale the PGs the same way the autoscaler works. You
can also change the autoscaler profile to start from a low number of PGs.
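For example, assuming your pool is named 'vm-pool' (substitute the real pool
name):
ceph osd pool set vm-pool pg_num 256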
Regards,
Yury.
On Tue, 26 Oct 2021, 20:20 Сергей Цаболов wrote:
> Hello to community!
>
> I need advice about SPECIFYING EXPECTED POOL SIZE
> In the page
> https://do
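The docs section with that title covers target_size_bytes / target_size_ratio;
a minimal sketch, with an illustrative pool name and size:
ceph osd pool set mypool target_size_bytes 100T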
in the SAML2 configuration. Maybe I’m
> mistaken, but I think the SAML2 cert is separate from the regular HTTPS
> cert?
>
>
>
> *From:* Yury Kirsanov
> *Sent:* Monday, October 25, 2021 11:52 AM
> *To:* Edward R Huyer
> *Cc:* ceph-users@ceph.io
> *Subject:* Re
Hi Edward,
You need to set the configuration like this, assuming that the certificate and
key are on your local disk:
ceph mgr module disable dashboard
ceph dashboard set-ssl-certificate -i <your-cert>.crt
ceph dashboard set-ssl-certificate-key -i <your-key>.key
ceph config-key set mgr/cephadm/grafana_crt -i <your-cert>.crt
ceph config-key set mgr/cephadm/grafana_key -i <your-key>.key
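After that, re-enable the module so the new certificate gets picked up:
ceph mgr module enable dashboard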
Hi E Taka,
Thanks a lot for sharing your script, yes, that could be one of the solutions
to fix this issue! The strange thing is that on another Ceph test deployment
I somehow get valid FQDNs reported by 'ceph mgr services' instead of IP
addresses. I've compared their configurations and it seems to
be exa
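For anyone following the thread, the advertised endpoint is what 'ceph mgr
services' prints; the JSON below is illustrative, with a hypothetical hostname:
ceph mgr services
{
    "dashboard": "https://host.example.com:8443/"
}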
passthrough modes). If you use HAProxy, there's even a sample config file
> for you to use (
> https://docs.ceph.com/en/pacific/mgr/dashboard/#haproxy-example-configuration
> ).
>
> Kind Regards,
> Ernesto
>
>
> On Thu, Oct 21, 2021 at 11:11 AM Yury Kirsanov
> wr
Hi,
I've just installed Ceph and have run into issues with the Dashboard URL. It
always gets rewritten to an IP address, and this causes issues with HTTPS, as
I only have a wildcard certificate. I have bootstrapped the cluster with the
'--allow-fqdn-hostname' option and have tried to set the
host addres
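If the problem is the redirect not matching the wildcard certificate, these
are the standard dashboard settings I'd look at first (whether they fix this
particular redirect is my assumption, and <fqdn> is a placeholder):
ceph config set mgr mgr/dashboard/server_addr <fqdn>
ceph config set mgr mgr/dashboard/ssl_server_port 8443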