Another thing to look into is dmesg on your OSD nodes. If there's a
hardware read error it will be logged in dmesg.
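A quick way to scan for the usual read-error signatures (the dmesg lines below are made up for illustration, and the grep patterns are common ones rather than an exhaustive list; on a real OSD node you would pipe `dmesg -T` through the same filter):

```shell
# Sample dmesg excerpt (fabricated for illustration only):
sample='[Mon Mar  5 18:00:01 2018] blk_update_request: I/O error, dev sdb, sector 123456
[Mon Mar  5 18:00:02 2018] sd 2:0:1:0: [sdb] Medium Error
[Mon Mar  5 18:00:03 2018] scheduling next run'
# Count lines matching typical kernel messages for a failing read:
hits=$(printf '%s\n' "$sample" | grep -ciE 'I/O error|medium error|uncorrectable')
echo "$hits"    # prints 2 for the sample above
```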
2018-03-05 18:26 GMT+03:00 Marco Baldini - H.S. Amiata <mbald...@hsamiata.it>:
Hi, and thanks for the reply.
The OSDs are all healthy; in fact, after a ceph pg repair the error goes away.

For example, we had read errors because of a faulty backplane interface in a server;
changing the chassis resolved this issue.
2018-03-05 14:21 GMT+03:00 Marco Baldini - H.S. Amiata <mbald...@hsamiata.it>:
Hi
After some days with debug_osd 5/5 I found [ERR] on different days.

I hit the same
error that you are seeing.
Could you post to the tracker issue that you are also seeing this?
Paul
2018-03-05 12:21 GMT+01:00 Marco Baldini - H.S. Amiata <mbald...@hsamiata.it>:
Hi
After some days with debug_osd 5/5 I found [ERR] entries on different days, like:

    ... candidate had a read error
I don't know what this error means, and as always a ceph pg repair
fixes it. I don't think this is normal.
Ideas?
Thanks
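Before running the repair, it is worth checking which PG and shard actually returned the read error. A sketch (the HEALTH output below is a made-up sample; `ceph health detail`, `rados list-inconsistent-obj`, and `ceph pg repair` are the real commands, run against the live cluster):

```shell
# Made-up `ceph health detail` output for illustration:
health='HEALTH_ERR 1 scrub errors; Possible data damage: 1 pg inconsistent
OSD_SCRUB_ERRORS 1 scrub errors
PG_DAMAGED Possible data damage: 1 pg inconsistent
    pg 2.3f is active+clean+inconsistent, acting [1,5,7]'
# Extract the damaged PG id:
pgid=$(printf '%s\n' "$health" | awk '/inconsistent, acting/ {print $2}')
echo "$pgid"    # prints 2.3f
# On the live cluster, these show which shard failed, and then repair it:
# rados list-inconsistent-obj "$pgid" --format=json-pretty
# ceph pg repair "$pgid"
```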
On 28/02/2018 14:48, Marco Baldini - H.S. Amiata wrote:
Hi
I read the bugtracker issue and it seems a lot like my problem.
I'll check OSD logs in the next days...
Thanks
On 28/02/2018 11:59, Paul Emmerich wrote:
Hi,
might be http://tracker.ceph.com/issues/22464
Can you check the OSD log file to see if the reported checksum
is 0x6706be76?
Paul
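For reference, one way to check is to grep the OSD logs for that value (the log line below is a mock-up, not verbatim OSD output; 0x6706be76 is reportedly the crc32c of an all-zero block, which is why it points at issue 22464):

```shell
# Mock OSD log line (format approximated for illustration):
logline='2018-02-28 11:30:00.000 -1 bluestore(/var/lib/ceph/osd/ceph-12) _verify_csum bad crc32c/0x1000 checksum at blob offset 0x0, got 0x6706be76, expected 0x8ab12c66'
match=$(printf '%s\n' "$logline" | grep -o 'got 0x6706be76')
echo "$match"    # prints: got 0x6706be76
# Against real logs, something like:
# grep '0x6706be76' /var/log/ceph/ceph-osd.*.log
```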
On 28.02.2018 at 11:43, Marco Baldini - H.S. Amiata wrote:
I can't understand where the problem comes from. I don't think it's
hardware: if I had a failed disk, I should always see problems on
the same OSD. Any ideas?
Thanks
--
*Marco Baldini*
*H.S. Amiata Srl*
Office: 0577-779396
Mobile: 335-8765169
Web: www.hsamiata.it
On 30/10/2017 10:31, Alwin Antreich wrote:
Hello Marco,
On Mon, Oct 23, 2017 at 05:48:10PM +0200, Marco Baldini - H.S. Amiata wrote:
Hello
ceph-mon services do not restart on any node; yesterday I manually restarted
ceph-mon and ceph-mgr on every node and since then they did not restart.

[...]'s configuration does not contain either a public or a
cluster network. I guess when there is only one there is no point...
Denes.
On 10/23/2017 05:52 PM, Marco Baldini - H.S. Amiata wrote:
Hi
I used the pveceph tool provided with Proxmox to initialize Ceph. I
can change it, but in that case should...
+0200, Marco Baldini - H.S. Amiata wrote:
Thanks for the reply.
My ceph.conf:
[global]
auth client required = none
auth cluster required = none
auth service required = none
bluestore_block_db_size = 64424509440
cluster network = 10.10.10.0/24
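As a side note on the config above, the bluestore_block_db_size value works out to exactly 60 GiB:

```shell
# 64424509440 bytes divided by 1073741824 (bytes per GiB):
gib=$(expr 64424509440 / 1073741824)
echo "$gib"    # prints 60
```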
Hello
ceph-mon services do not restart on any node; yesterday I manually
restarted ceph-mon and ceph-mgr on every node and since then they did
not restart.
*pve-hs-2$ systemctl status ceph-mon@pve-hs-2.service*
ceph-mon@pve-hs-2.service - Ceph cluster monitor daemon
Loaded: loaded (/lib/sy
on 10.10.10.0/24"
Does this mean that the nodes have separate public and cluster
networks, both on 10.10.10.0/24, or that you did not specify a
separate cluster network?
Please provide the routing table, ifconfig output, and ceph.conf.
Regards,
Denes
On 10/23/2017 03:35 PM, Marco Baldini - H.S. Amiata wrote:

[...] and that's from the log of node 10.10.10.252, so it's losing
connection with the monitor on the same node; I don't think it's
network related.
I already tried with nodes reboot, ceph-mon and ceph-mgr restart, but
the problem is still there.
Any ideas?
Thanks
For 10 disks, should I give 100GB of DB to each OSD? It's
those things people want to know. So we need numbers to figure these things out.
Wido
[...] entirely as db space, I get results like:

root@vm-hv-01:~# for i in {60..65}; do
    echo -n "osd.$i db per object: "
    expr $(ceph daemon osd.$i perf dump | jq '.bluefs.db_used_bytes') / \
         $(ceph daemon osd.$i perf dump | jq '.bluestore.bluestore_onodes')
done
osd.60 d[...]
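Numbers like these can be turned into a rough DB sizing estimate. A sketch with made-up values (both inputs below are assumptions for illustration, not measurements from this cluster):

```shell
db_used_bytes=6861979648   # assumed .bluefs.db_used_bytes from `perf dump`
onodes=245000              # assumed .bluestore.bluestore_onodes (object count)
per_object=$(expr $db_used_bytes / $onodes)
echo "$per_object"         # bytes of RocksDB space per object, ~28 KB here
# Extrapolate: expected object count * per_object gives a ballpark DB partition size.
expr 1000000 \* $per_object   # bytes needed for 1M objects at this rate
```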