Here's an example of an OSD with separate block.db:
---snip---
osd-db-0cd3d532-a03a-4a17-bc0c-ae3e8c710ace
ceph.block_device=/dev/ceph-63cf931a-f7a8-4ab8-bdd2-4c01c8e10f4d/osd-block-a30f85ab-b2b9-4f75-a1f9-48c2ef6db1c1,ceph.block_uuid=8wWfCT-4aQw-2VKR-I1Lf-a0e3-rRRA-v3pLqL,ceph.cephx_lockbox_sec
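For reference, this kind of listing can be reproduced by asking ceph-volume (or LVM directly) for the LV tags it reads; a minimal sketch using only stock commands:

# show all OSD-related LVs and the ceph.* tags ceph-volume reads from them
ceph-volume lvm list

# or query LVM directly for the same tags
lvs -o lv_name,vg_name,lv_tags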
hello
many thanks for the time you're taking to help me with this.
I restarted one of the backups and now the space usage for the cephfs
metadata pool went from 17 GB to 70 GB, but the hints you gave me don't
seem to help here.
cephfs-metadata/mds1_openfiles.0 mtime 2021-04-06 18:27:08.00, size
0
cephfs-metadata/m
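The per-object line above looks like 'rados stat' output; a sketch of how it might have been produced, assuming the pool and object names shown above:

# list the MDS openfiles objects in the metadata pool
rados -p cephfs-metadata ls | grep openfiles

# print mtime and size for one of them, as in the line above
rados -p cephfs-metadata stat mds1_openfiles.0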
thanks for the details. this is a regression from changes to the
datalog storage for multisite - this -5 error is coming from the new
'fifo' backend. as a workaround, you can set the new
'rgw_data_log_backing' config variable back to 'omap'
Adam has fixes already merged to the pacific branch; be a
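A sketch of that workaround as I read it; applying it at the 'global' level and restarting the RGWs afterwards are assumptions, not something stated above:

# switch the datalog backend back to omap
ceph config set global rgw_data_log_backing omap
# then restart the RGW daemons so they pick up the change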
Where does it read it from?
does it keep it in the block.db lv, or the block dev lv, or both?
I removed the vg from the block dev and did wipefs, if I recall.
- Original Message -
From: "Eugen Block"
To: "Philip Brown"
Cc: "ceph-users"
Sent: Tuesday, April 6, 2021 9:06:50 AM
Subje
Did you recreate the OSD with the same UUID? I'm guessing ceph-volume
just reads LV tags and thinks there's a block.db for that OSD. Before
creating the OSD you should also wipe the LV containing the block.db
for the failed OSD.
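A sketch of wiping the old block.db LV before re-creating the OSD; the <db-vg>/<db-lv> name is a placeholder for the failed OSD's DB volume:

# clear data and LV tags on the old DB LV
ceph-volume lvm zap <db-vg>/<db-lv>

# or remove the LV entirely so it can be recreated
ceph-volume lvm zap --destroy <db-vg>/<db-lv>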
Quoting Philip Brown:
It was up and running.
I should
It was up and running.
I should mention that the situation came about when I was testing OSD rebuild.
So I had a previously autogenerated hybrid OSD, with SSD db, and HDD block dev.
I then stopped the OSD, wiped the HDD, did "ceph osd rm" and "ceph auth rm",
and then rebuilt with ceph-volume.. w
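For context, a sketch of that rebuild cycle; OSD id 7 and the device/LV names are assumptions:

# stop and remove the old OSD
systemctl stop ceph-osd@7
ceph osd rm 7
ceph auth rm osd.7
ceph osd crush remove osd.7

# wipe the HDD
ceph-volume lvm zap --destroy /dev/sdX

# recreate the hybrid OSD, pointing block.db at the SSD-backed LV
ceph-volume lvm create --data /dev/sdX --block.db <db-vg>/<db-lv>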
I just did this recently. The only painful part was using
"monmaptool" to change the monitor IP addresses on disk. Once you do
that, and change the monitor IPs in ceph.conf everywhere, it should
come up just fine.
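A sketch of the monmaptool route, assuming a monitor with id 'a' moving to a new address; the id and addresses are placeholders:

# stop the monitor, then rewrite its on-disk monmap
systemctl stop ceph-mon@a
ceph-mon -i a --extract-monmap /tmp/monmap
monmaptool --rm a /tmp/monmap
monmaptool --add a 192.168.100.10:6789 /tmp/monmap
ceph-mon -i a --inject-monmap /tmp/monmap

# update the monitor IPs in ceph.conf everywhere, then start the mon again
systemctl start ceph-mon@a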
Mark
On Tue, Apr 6, 2021 at 8:08 AM Jean-Marc FONTANA
wrote:
>
> Hello everyon
Thanks Robert,
So if the DB device is _not_ empty due to existing running OSDs, auto
orchestration will not be an option until all OSDs on that DB device are
converted.
That makes sense.
FWIW, I did confirm that this process
https://tracker.ceph.com/issues/46691 does work for replacing an O
Hello everyone,
We have installed a Nautilus Ceph cluster with 3 monitors, 5 OSDs and 1
RGW gateway.
It works, but now we need to change the IP addresses of these machines
to put them in a DMZ.
Are there any recommendations on how to go about doing this?
Best regards,
Hi,
The DB device needs to be empty for an automatic OSD service. The service will
then create N db slots using logical volumes and not partitions.
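A sketch of what such an automatic OSD service spec could look like (cephadm drive group); the service id, device filters and slot count are assumptions:

cat > osd-spec.yml <<'EOF'
service_type: osd
service_id: hdd-with-ssd-db
placement:
  host_pattern: '*'
spec:
  data_devices:
    rotational: 1
  db_devices:
    rotational: 0
  db_slots: 4
EOF
ceph orch apply osd -i osd-spec.yml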
Regards
--
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin
https://www.heinlein-support.de
Tel: 030 / 405051-43
Fax: 030 /
Hi,
On 4/6/21 2:20 PM, Olivier AUDRY wrote:
hello
now the backup has been running for 3 hours and the cephfs metadata pool
has gone from 20 GB to 479 GB...
POOL             ID  STORED   OBJECTS  USED     %USED  MAX AVAIL
cephfs-metadata  12  479 GiB  642.26k  1.4 TiB  18.79    2.0 TiB
cephfs-data0
Before I upgrade our Nautilus ceph to Octopus, I would like to make sure
I am able to replace existing OSDs when they fail. However, I have not
been able to create an OSD in Octopus with the layout we are using in
Nautilus. I am testing this on a VM cluster so as not to touch any
production s
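For comparison, a sketch of creating that layout by hand with ceph-volume on Octopus; the device names are placeholders:

# preview how ceph-volume would carve DB slots on the SSD for several HDD OSDs
ceph-volume lvm batch --report /dev/sdc /dev/sdd /dev/sde --db-devices /dev/nvme0n1

# drop --report to actually create the OSDs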
Philip, good suggestion! I created a ticket for that [1]
[1] https://tracker.ceph.com/issues/50163
K
> On 5 Apr 2021, at 23:47, Philip Brown wrote:
>
> You guys might consider a feature request of doing some kind of check on long
> device path names getting passed in, to see if the util sho
Is the OSD up and running?
Do you see IO on the dedicated block.db LV when running a 'ceph tell
osd.7 bench'? It sounds very strange, I haven't seen that.
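A sketch of that check; osd.7 is from the thread, the SSD device name is a placeholder:

# drive some load through the OSD
ceph tell osd.7 bench

# in another shell, watch the SSD that backs block.db for write activity
iostat -xm /dev/sdb 1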
Quoting Philip Brown:
I am in a situation where I see conflicting information.
On the one hand,
ls -l /var/lib/ceph/osd/ceph-7
shows
hello
now the backup has been running for 3 hours and the cephfs metadata pool
has gone from 20 GB to 479 GB...
POOL             ID  STORED   OBJECTS  USED     %USED  MAX AVAIL
cephfs-metadata  12  479 GiB  642.26k  1.4 TiB  18.79    2.0 TiB
cephfs-data0     13  2.9
hi all,
I use a 15.2.10 Ceph cluster with Ubuntu 18.04. I created an RBD device and
mapped it into a host (Ubuntu 18.04, Ceph 15.2.10).
When I run "mkfs.fs -f /dev/rbd0" it hangs, but writing data with
"rados -p {poolname} put obj myfile" works fine.
Has anyone encountered this kind of problem?
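For reference, a sketch of the sequence described; pool and image names are placeholders, and mkfs.xfs stands in for the mkfs command that hangs (the "mkfs.fs" above is presumably a typo):

# create and map the image
rbd create rbdpool/test --size 10G
rbd map rbdpool/test          # e.g. /dev/rbd0

# the step reported to hang
mkfs.xfs -f /dev/rbd0

# while a plain librados write works fine
rados -p rbdpool put obj myfile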
Dear Cephers,
My client has 10 volumes; each volume was assigned 8192 PGs, 81920 PGs in total.
The cluster runs Luminous with BlueStore. During a power outage the cluster
restarted, and we observed that OSD peering consumed a lot of CPU and memory
resources, even leading to some OSD flapping.
My q
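For checking the numbers mentioned above, a sketch using stock commands:

# per-pool pg_num settings
ceph osd pool ls detail

# cluster-wide PG count and state
ceph -s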
Good morning.
I have a bucket and it has 50M objects in it. The bucket was created with
multisite sync, and that zone is now the master zone and the only zone.
After a health check, I saw weird objects in a pending attr state.
I've tried to remove them with "radosgw-admin object rm --bypass-gc"
but I couldn't delete
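Spelled out in full, the command described would look something like this; bucket and object names are placeholders:

# remove one of the stuck objects, skipping the garbage collector
radosgw-admin object rm --bucket=<bucket-name> --object=<object-key> --bypass-gc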