Hello Gert,
I recreated the self signed certificate.
SELinux was disabled and I temporarily disabled the firewall.
It still doesn't work and there is no entry in journalctl -f.
Something from the previous Nautilus or CentOS 7 installation is apparently
still present somewhere and is causing this problem.
I think
Hi,
So you have actually created many LVs in the DB VG, one per OSD? That is what I
want to avoid, because if some of the OSDs are not in use, their LVs still
hold the space, don't they?
Istvan Szabo
Senior Infrastructure Engineer
---
Agoda Serv
Hi Manuel,
My replica count is 2, hence about 10TB of unaccounted-for usage.
Andrei
- Original Message -
> From: "EDH - Manuel Rios"
> To: "Andrei Mikhailovsky"
> Sent: Tuesday, 28 April, 2020 23:57:20
> Subject: RE: rados buckets copy
> Is your replica x3? 9 x 3 = 27, plus some overhead, rounded
Hello,
I have a problem with the radosgw service where the actual disk usage (ceph df
shows 28TB used) is far more than what radosgw-admin bucket stats reports
(9TB used). I have tried to get to the bottom of the problem, but no one seems to
be able to help. As a last resort I will attempt to co
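For anyone who wants to reproduce the comparison, a rough sketch of how the two
numbers can be pulled side by side (the jq expression and the TiB conversion are
my assumptions, not from the original report):

# Cluster-wide usage as accounted by RADOS
ceph df detail

# Sum of all bucket sizes as accounted by RGW, converted from KiB to TiB (needs jq)
radosgw-admin bucket stats | jq '[.[].usage."rgw.main".size_kb // 0] | add / 1024 / 1024 / 1024'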
On Mon, Apr 27, 2020 at 11:21 AM Patrick Dowler wrote:
>
> I am trying to manually create a radosgw instance for a small development
> installation. I was able to muddle through and get a working mon, mgr, and
> osd (x2), but the docs for radosgw are based on ceph-deploy which is not
> part of the
This issue did subside after restarting the original primary daemon
and failing back to it. I've since enabled multi-MDS and latencies
overall have decreased even further.
Thanks for your assistance.
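For context, going from one to two active MDS ranks is a single command of this
shape (the filesystem name "cephfs" is an assumption):

# Allow two active MDS daemons on the filesystem; a standby takes the extra rank
ceph fs set cephfs max_mds 2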
On Wed, Apr 15, 2020 at 8:32 AM Josh Haft wrote:
>
> Thanks for the assistance.
>
> I restarted
I'm sure there is a simpler way, but I wanted DBs of a certain size and a
data OSD on the NVMe as well. I wrote a script to create all the VGs and
LVs of the sizes that I wanted, then added this to my Ansible inventory (I
prefer to have as much config as possible in the inventory rather than scattered
throughou
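Not the actual script, but a minimal sketch of the kind of carving described above
(device name, VG/LV names and sizes are made up):

# Carve one NVMe into fixed-size DB LVs plus a data LV for an extra OSD
vgcreate nvme0 /dev/nvme0n1
for i in 0 1 2 3; do
    lvcreate -L 60G -n db-$i nvme0        # RocksDB/WAL volume for HDD OSD $i
done
lvcreate -l 100%FREE -n data-nvme0 nvme0  # remaining space as a data OSD on the NVMe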
Sorry for the typo: it must be journalctl -f instead of syslogctl -f.
-gw
On Tue, 2020-04-28 at 19:12 +, Gert Wieberdink wrote:
> Hello Simon, ceph-mgr and dashboard installation should be
> straightforward. These are tough ones (internal server error 500).
> Did you create a self signed cert for dash
Hello Simon, ceph-mgr and dashboard installation should be
straightforward.
These are tough ones (internal server error 500). Did you create a self
signed cert for dashboard? Did you check firewalld (port 8443) and/or
SELinux? Does syslogctl -f show anything?
rgds,
-gw
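For reference, the checks above correspond roughly to commands like these (a
sketch; the dashboard is assumed to listen on the default port 8443):

# Re-create a self-signed certificate for the dashboard and reload the module
ceph dashboard create-self-signed-cert
ceph mgr module disable dashboard
ceph mgr module enable dashboard

# Open the dashboard port in firewalld
firewall-cmd --permanent --add-port=8443/tcp
firewall-cmd --reload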
On Tue, 2020-04-28 at 12:17 +000
The Nautilus manual recommends a kernel >= 4.14 for multiple active
MDSes. What are the potential issues of running the 4.4 kernel with
multiple MDSes? We are in the process of upgrading the clients, but at
times we overrun the capacity of a single MDS server.
MULTIPLE ACTIVE METADATA SERVERS
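One way to see which kernel versions the connected clients are actually running is
to dump the MDS sessions, which include the client metadata (rank 0 here is just an
example):

# Kernel clients report their kernel version in the session metadata
ceph tell mds.0 session ls | grep -i kernel_version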
I'm pretty sure that you hit the same issue that we already reported:
https://tracker.ceph.com/issues/43756
Garbage upon garbage stored in our OSDs, without being able to clean it up,
wasting a lot of space.
As you can see it is solved in the newer versions, but... the last version didn't
have any "scrub" o
Hi Igor,
but the performance issue is still present even on the recreated OSD.
# ceph tell osd.38 bench -f plain 12288000 4096
bench: wrote 12 MiB in blocks of 4 KiB in 1.63389 sec at 7.2 MiB/sec
1.84k IOPS
vs.
# ceph tell osd.10 bench -f plain 12288000 4096
bench: wrote 12 MiB in blocks of 4 K
Hi Igor,
Am 27.04.20 um 15:03 schrieb Igor Fedotov:
> Just left a comment at https://tracker.ceph.com/issues/44509
>
> Generally bdev-new-db performs no migration, RocksDB might eventually do
> that but no guarantee it moves everything.
>
> One should use bluefs-bdev-migrate to do actual migrati
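For the archives, the migrate call referenced above has roughly this shape (OSD id
and device paths are placeholders):

# Move BlueFS (RocksDB) data that spilled onto the main device over to the DB device
systemctl stop ceph-osd@3
ceph-bluestore-tool bluefs-bdev-migrate --path /var/lib/ceph/osd/ceph-3 \
    --devs-source /var/lib/ceph/osd/ceph-3/block \
    --dev-target /var/lib/ceph/osd/ceph-3/block.db
systemctl start ceph-osd@3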
Short update - please use the bluefs_sync_write parameter instead of
bdev-aio; changing the latter is in fact not supported.
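If anyone wants to experiment with that, a sketch of toggling the option (the OSD
id is a placeholder; verify on your own version whether a restart is needed):

# Enable synchronous BlueFS writes for the OSDs, then restart the affected OSD
ceph config set osd bluefs_sync_write true
systemctl restart ceph-osd@5    # placeholder OSD id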
On 4/28/2020 7:35 PM, Igor Fedotov wrote:
Francois,
here are some observations gathered from your log.
1) RocksDB reports an error on the following .sst file:
-35> 2020-04-2
Excellent analysis Igor!
Mark
On 4/28/20 11:35 AM, Igor Fedotov wrote:
Francois,
here are some observations gathered from your log.
1) RocksDB reports an error on the following .sst file:
-35> 2020-04-28 15:23:47.612 7f4856e82a80 -1 rocksdb: Corruption:
Bad table magic number: expected 986351
On 4/28/20 2:21 AM, Simone Lazzaris wrote:
> On Monday, 27 April 2020 18:46:09 CEST, Mike Christie wrote:
>
> [snip]
>
>> Are you using the ceph-iscsi tools with tcmu-runner or did you setup
>> tcmu-runner directly with targetcli?
>
> I followed this guide:
> htt
Hi Katarzyna,
Incomplete multipart uploads are not considered orphans.
With respect to the 404s: which version of Ceph are you running? What tooling
are you using to list and cancel? Can you provide a console transcript of the
listing and cancelling?
Thanks,
Eric
--
J. Eric Ivancich
he / h
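Not knowing the tooling in use, one common way to list and abort incomplete
multipart uploads against an RGW endpoint is the AWS CLI (endpoint, bucket, key
and upload id below are placeholders):

aws --endpoint-url https://rgw.example.com s3api list-multipart-uploads --bucket mybucket
aws --endpoint-url https://rgw.example.com s3api abort-multipart-upload \
    --bucket mybucket --key path/to/object --upload-id <UploadId>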
Francois,
here are some observations gathered from your log.
1) RocksDB reports an error on the following .sst file:
-35> 2020-04-28 15:23:47.612 7f4856e82a80 -1 rocksdb: Corruption:
Bad table magic number: expected 9863518390377041911, found
12950032858166034944 in db/068269.sst
2) which relat
Here is the output of ceph-bluestore-tool bluefs-bdev-sizes
inferring bluefs devices from bluestore path
slot 1 /var/lib/ceph/osd/ceph-5/block -> /dev/dm-17
1 : device size 0x746c000 : own 0x[37e1eb0~4a8290] =
0x4a8290 : using 0x5bc78(23 GiB)
the result of the debug-bluest
Hello,
Yes, I upgraded the system to CentOS 8 and now I can install the dashboard module.
But the problem now is, I cannot log in to the dashboard.
I deleted every cached file on my end and reinstalled the mgr and dashboard
several times.
If I try to log in with a wrong password, it tells me th
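In case it helps, the dashboard keeps its own user database inside the mgr, so a
sketch of resetting the admin credentials looks roughly like this (the user name is
an assumption, and depending on the release the password is passed as an argument
or via -i <file>):

# List dashboard users and reset the admin password
ceph dashboard ac-user-show
ceph dashboard ac-user-set-password admin 'NewStrongPassword1!'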
Hi Francois,
Could you please share the OSD startup log with debug-bluestore (and
debug-bluefs) set to 20?
Also please run ceph-bluestore-tool's bluefs-bdev-sizes command and
share the output.
Thanks,
Igor
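For completeness, those two requests map to something like the following (the OSD
id and data path are placeholders):

# Either set "debug bluestore = 20" and "debug bluefs = 20" under [osd] in ceph.conf,
# or pass them once on a foreground start of the affected OSD
ceph-osd -f --cluster ceph --id 5 --debug_bluestore 20 --debug_bluefs 20

# With the OSD stopped, report how much of each device BlueFS owns and uses
ceph-bluestore-tool bluefs-bdev-sizes --path /var/lib/ceph/osd/ceph-5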
On 4/28/2020 12:55 AM, Francois Legrand wrote:
Hi all,
*** Short version ***
Is the
Hi Szabo,
Per-bucket sync with improved AWS compatibility was added in Octopus.
regards,
Matt
On Mon, Apr 27, 2020 at 11:18 PM Szabo, Istvan (Agoda)
wrote:
>
> Hi,
>
> is there a way to synchronize a specific bucket by Ceph across the available
> datacenters?
> I've just found multi site setu
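This is not necessarily the new Octopus sync-policy machinery Matt refers to, but
the long-standing per-bucket toggle in a multisite setup looks like this (the
bucket name is a placeholder):

# Toggle and inspect replication for a single bucket
radosgw-admin bucket sync disable --bucket=somebucket
radosgw-admin bucket sync enable --bucket=somebucket
radosgw-admin bucket sync status --bucket=somebucket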
Hi,
I've tried to create a Ceph Luminous cluster for testing purposes with
ceph-ansible on my 3 Hyper-V VMs, but I got the error below with the
following OSD configuration:
---
dummy:
osd_scenario: lvm
lvm_volumes:
  - data: osd1lv
    data_vg: osd1
    db: journal_l
Hello,
running Ceph Nautilus 14.2.4, we encountered this documented dynamic resharding
issue:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-November/037531.html
We disabled dynamic resharding in the configuration, and attempted to reshard
to 1 shard:
# radosgw-admin reshard add --buc
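For reference, disabling dynamic resharding and queueing a manual reshard look
roughly like this (the bucket name is a placeholder; the original command above is
truncated in the archive):

# In ceph.conf on the RGW hosts (then restart them): rgw_dynamic_resharding = false
radosgw-admin reshard add --bucket=somebucket --num-shards=1
radosgw-admin reshard list
radosgw-admin reshard process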
This is outdated but will get you through it (especially the pools and
civetweb):
yum install ceph-radosgw
ceph osd pool create default.rgw 8
ceph osd pool create default.rgw.meta 8
ceph osd pool create default.rgw.control 8
ceph osd pool create default.rgw.log 8
ceph osd pool create .rgw.root
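To go with the pool list above, bringing up the gateway daemon itself is roughly
the following (client id, host and keyring path are assumptions):

# Create a key for the gateway, drop it where the daemon expects it, and start it
mkdir -p /var/lib/ceph/radosgw/ceph-rgw.gw1
ceph auth get-or-create client.rgw.gw1 mon 'allow rw' osd 'allow rwx' \
    -o /var/lib/ceph/radosgw/ceph-rgw.gw1/keyring
chown -R ceph:ceph /var/lib/ceph/radosgw
systemctl enable --now ceph-radosgw@rgw.gw1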
Can you please show the keyring you use in the radosgw containers and also
the ceph config? It seems like an authentication issue, or your containers don't
pick up your ceph config.
Hello Andrei,
I have kind of the same problem, but because it's production I don't want to
make sudden moves that would cause data redistribution and affect clients (only
with change approval and so on). From what I have tried on other test
clusters and according to the documentation... you need to
You can check the lock list on each RBD image, and you can try removing the lock,
but only when the VM is shut down and the RBD image is not in use:
rbd lock list pool/volume-id
rbd lock rm pool/volume-id "lock_id" client_id
This was a bug in the Luminous upgrade, I believe, and I found it back in the day
in this arti
On Monday, 27 April 2020 18:46:09 CEST, Mike Christie wrote:
[snip]
> Are you using the ceph-iscsi tools with tcmu-runner or did you setup
> tcmu-runner directly with targetcli?
>
I followed this guide:
https://docs.ceph.com/docs/master//rbd/iscsi-target-cli/[1] and
configured the ta