Hi,
All rbd features were added to ceph-csi last year [1].
You can add the object-map feature to your options like any other:
```
imageFeatures: layering,exclusive-lock,object-map,fast-diff,deep-flatten
mapOptions: ms_mode=prefer-crc
```
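For context, here is a rough sketch of where those options sit in a full ceph-csi RBD StorageClass; the clusterID, pool, and secret names below are placeholders, not values from this thread:
```
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rbd-object-map            # hypothetical name
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: <cluster-id>          # placeholder
  pool: <rbd-pool>                 # placeholder
  imageFeatures: layering,exclusive-lock,object-map,fast-diff,deep-flatten
  mapOptions: ms_mode=prefer-crc
  csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
  csi.storage.k8s.io/provisioner-secret-namespace: ceph-csi
  csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
  csi.storage.k8s.io/node-stage-secret-namespace: ceph-csi
reclaimPolicy: Delete
```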
k
[1] https://github.com/ceph/ceph-csi/pull/2514
Hi,
we want to provision OSDs on nodes with 36 18 TB HDDs; their RocksDBs should be
stored on 960 GB SSDs (6 DB slots per SSD).
This is Ceph version 16.2.7 from Red Hat Ceph Storage 5.1.
When using this YAML service specification:
service_type: osd
service_id: HDD-OSDs
placement:
  label: 'hddosd'
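The spec as quoted is cut off here; a drivegroup along these lines would typically continue with device filters. The rotational filters and db_slots value below are an assumed sketch, not the original file:
```
service_type: osd
service_id: HDD-OSDs
placement:
  label: 'hddosd'
data_devices:
  rotational: 1    # the 36 HDDs
db_devices:
  rotational: 0    # the 960 GB SSDs
db_slots: 6        # assumed: 6 RocksDB slots per SSD
```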
Hi Yuri,
Thanks for the reminder:
Dashboard approved (CC Nizam)!
Kind Regards,
Ernesto
On Wed, Jul 27, 2022 at 8:30 AM Kotresh Hiremath Ravishankar <khire...@redhat.com> wrote:
>
>
> On Wed, Jul 27, 2022 at 5:02 AM Gregory Farnum wrote:
>
>> On Tue, Jul 26, 2022 at 3:41 PM Yuri Weinstein
>
Hi,
On 7/27/22 14:08, Robert Sander wrote:
> we want to provision OSDs on nodes with 36 18 TB HDDs, their RocksDBs
> should be stored on 960 GB SSDs (6 DB slots per SSD).
>
> Running with only 6 HDDs and one SSD yields the desired result:
To me this looks like a recent bug that I stumbled upon in this tracker
https://tracker.ceph.com/issues/56031.
On 27.07.22 14:23, Arthur Outhenin-Chalandre wrote:
> To me this looks like a recent bug that I stumbled upon in this tracker
> https://tracker.ceph.com/issues/56031. It's a pretty bad regression
> IMO... The fix is already available (and I just opened the backports
> this morning).
Yep, that looks like it.
Hello everyone,
After upgrading the monitors and mgrs to Octopus (15.2.16), the system told me
that some pools did not have the correct pg_num: some of them above the
optimum, and the busiest one below it (256 of the 1024 required).
[root@cephmon01 ~]# ceph versions
{
    "mon": {
        "ceph version
Hi,
this seems to be another example of why pool size = 2 is a bad idea.
This has been discussed so many times...
- If we stop osd.131 the PG becomes inactive and down (as if it were
the only OSD containing the objects): Reduced data availability: 1
pg inactive, 1 pg down
Because it is, ceph
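If the pool really is running with size = 2, the usual fix is to raise replication; the pool name below is a placeholder:
```
ceph osd pool set <pool> size 3
ceph osd pool set <pool> min_size 2
```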
Hi,
I installed a fresh cluster using cephadm:
- bootstrapped one node
- extended it to 3 monitor nodes, each running mon + mgr, using a
spec file
- added 12 OSD hosts to the spec file with the following disk rules:
~~~
service_type: osd
service_id: osd_spec_hdd
placement:
  label: osd
~~~
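For reference, the steps described above map roughly onto these commands; the IP, hostname, and spec file name are placeholders:
```
cephadm bootstrap --mon-ip 192.168.0.10
ceph orch host add ceph02 192.168.0.11
ceph orch host label add ceph02 osd
ceph orch apply -i cluster-spec.yaml
```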
Hi Daniel,
This issue seems to be showing up in 17.2.2, details in
https://tracker.ceph.com/issues/55304. We are currently in the process
of validating the fix https://github.com/ceph/ceph/pull/47270 and
we'll try to expedite a quick fix.
In the meantime, we have builds/images of the dev version
Hi Neha,
thanks for the quick response. Sorry for the stupid question: to use that
image, do I pull it on the machine, then change
/var/lib/ceph/${clusterid}/mgr.${unit}/unit.image and start the service?
Thanks,
Daniel
On 27.07.22 at 17:23, Neha Ojha wrote:
> Hi Daniel,
> This issue seems to be showing up in 17.2.2, details in
> https://tracker.ceph.com/issues/55304.
the unit.image file is just there for cephadm to look at as part of
gathering metadata, I think. What you'd want to edit is the unit.run file
(in the same directory as the unit.image). It should have a really long
line specifying a podman/docker run command, and somewhere in there will be
"CONTAINER_IMAGE".
yeah, that works if there is a working mgr to send the command to. I was
assuming here all the mgr daemons were down since it was a fresh cluster so
all the mgrs would have this bugged image.
On Wed, Jul 27, 2022 at 12:07 PM Vikhyat Umrao wrote:
> Adam - or we could simply redeploy the daemon wi
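For reference, the redeploy Vikhyat is suggesting would look something like this; the daemon name and image are placeholders:
```
ceph orch daemon redeploy mgr.ceph01.xyzabc quay.io/ceph/ceph:v17.2.1
```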
Hi,
thanks, that worked. I deployed the first MGR manually and the others
using the orchestrator.
Thank you so much.
Daniel
On 27.07.22 at 18:23, Adam King wrote:
> yeah, that works if there is a working mgr to send the command to. I was
> assuming here all the mgr daemons were down since it was a fresh cluster so
> all the mgrs would have this bugged image.
You're supposed to upgrade the mons first...
https://docs.ceph.com/en/quincy/releases/pacific/#upgrading-non-cephadm-clusters
Maybe try downgrading the mgrs back to Octopus? That's a bit of a scary
situation.
Tyler
On Wed, Jul 27, 2022, 1:24 PM wrote:
> Currently running Octopus 15.2.16, trying to upgrade to Pacific using
> cephadm.
On Wed, Jul 27, 2022 at 12:40 AM Yuri Weinstein wrote:
>
> Ack
>
> We need to get all approvals and resolve the ceph-ansible issue.
The primary cause of the issues with ceph-ansible is that octopus was pinned
to the stable_6.0 branch of ceph-ansible; octopus should be using stable_5.0
according to https://docs.ceph.c
On Wed, Jul 27, 2022 at 10:24 AM wrote:
> Currently running Octopus 15.2.16, trying to upgrade to Pacific using
> cephadm.
>
> 3 mon nodes running 15.2.16
> 2 mgr nodes running 16.2.9
> 15 OSDs running 15.2.16
>
> The mon/mgr nodes are running in lxc containers on Ubuntu running docker
> from th
What actual hosts are meant to have a mgr here? The naming makes it look as
if it thinks there's a host "ceph01" and a host "cephadm" and both have 1
mgr. Is that actually correct or is that aspect also messed up?
Beyond that, you could try manually placing a copy of the cephadm script on
each host.
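Fetching the standalone cephadm script onto a host can be done along these lines; the branch in the URL is an assumption:
```
curl --silent --remote-name --location \
    https://github.com/ceph/ceph/raw/pacific/src/cephadm/cephadm
chmod +x cephadm
```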