There are another 11 OSDs and they performed well.
This is not the first occurrence of this problem. When it happened
the first time, we tried to restart the whole server, but then we
found that restarting the OSD container is enough...
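For completeness, the kind of restart that turned out to be enough, as a sketch via the orchestrator (osd.1 is just an example id; the cephadm systemd unit for the OSD can be restarted directly as well):

# restart a single OSD daemon through the orchestrator
ceph orch daemon restart osd.1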
Sincerely
Jan Marek
--
Ing. Jan Marek
University of South Bohemia
Academic Computer Centre
I've found the /var/lib/ceph/crash directories; I'm attaching to
this message the files which I've found there.
Please, can you advise what I can do now? It seems that the
RocksDB is either incompatible or corrupted :-(
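If it helps, a sketch of how the same crash reports can also be listed and inspected through the crash module instead of the raw directories (the crash id is a placeholder):

# list recorded crashes, then show details of one of them
ceph crash ls
ceph crash info <crash-id>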
Thanks in advance.
Sincerely
Jan Marek
--
Ing. Jan Marek
University of South Bohemia
bzip2 format, how can I share it with you?
It contains the crash log from the start of osd.1 too, but I can
cut that out and send it to the list...
Sincerely
Jan Marek
On Thu, Jan 04, 2024 at 02:43:48 CET, Jan Marek wrote:
> Hi Igor,
>
> I've run this one-liner:
>
> for i in {0..12};
settings to fix the issue.
>
> This is reproducible and fixable in my lab this way.
>
> Hope this helps.
>
>
> Thanks,
>
> Igor
>
>
> On 15/01/2024 12:54, Jan Marek wrote:
> > Hi Igor,
> >
> > I've tried to start the ceph-osd daemon as you
this PG once and once again?
And another question: is scrubbing part of the mClock scheduler?
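In case it is useful, a quick way to check which op scheduler the OSDs are actually running (a sketch):

# prints e.g. "mclock_scheduler" or "wpq"
ceph config get osd osd_op_queue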
Many thanks for the explanation.
Sincerely
Jan Marek
--
Ing. Jan Marek
University of South Bohemia
Academic Computer Centre
Phone: +420389032080
http://www.gnu.org/philosophy/no-word-attachments.cs.html
problems.
And there was a question whether the scheduler manages the CEPH
cluster background (and client) operations in such a way that the
cluster is still usable for clients.
I've tried to send feedback to the developers.
Thanks for understanding.
Sincerely
Jan Marek
On Wed, Jan 24, 2024 at 11:18:20 CET, Peter Grand
On Saturday I will change some networking settings and I will
try to start the upgrade process, maybe with --limit=1, to be "soft"
on the cluster and on our clients...
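The kind of staggered upgrade invocation I have in mind, as a sketch (the image tag assumes the 18.2.1 release I'm targeting; --limit is the flag mentioned above):

# upgrade one daemon at a time
ceph orch upgrade start --image quay.io/ceph/ceph:v18.2.1 --limit 1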
> -Sridhar
Sincerely
Jan Marek
--
Ing. Jan Marek
University of South Bohemia
Academic Computer Centre
Phone: +420389032080
client_ops".
When I was stuck in the upgrade process, I had many such
records in the logs, see the attached file. Since the upgrade
completed, these messages went away... Can this be the reason
for the poor performance?
Sincerely
Jan Marek
On Thu, Jan 25, 2024 at 02:31:41 CET, Jan Marek wrote:
> Hello Sridhar,
>
Hello again,
I'm sorry, I forgot to attach the file... :-(
Sincerely
Jan
On Tue, Jan 30, 2024 at 11:09:44 CET, Jan Marek wrote:
> Hello Sridhar,
>
> on Saturday I finished the upgrade process to 18.2.1.
>
> Cluster is now in HEALTH_OK state and performs well.
>
> A
pe to async+posix and restarted
ceph.target, the cluster converged to the HEALTH_OK state...
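For the record, a sketch of the change described above (it assumes the messenger type is set cluster-wide and that restarting ceph.target on every host is acceptable):

# switch the messenger implementation, then restart the daemons on each host
ceph config set global ms_type async+posix
systemctl restart ceph.target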
Thanks for the advice...
Sincerely
Jan Marek
--
Ing. Jan Marek
University of South Bohemia
Academic Computer Centre
Phone: +420389032080
http://www.gnu.org/philosophy/no-word-attachments.cs.html
Hello,
we've found the problem:
The systemd unit for the OSD is missing this line in the
[Service] section:
LimitMEMLOCK=infinity
When I added this line to the systemd unit, the OSD daemon started and
we have the HEALTH_OK state in the cluster status.
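A sketch of the same fix done as a systemd drop-in instead of editing the unit in place (the unit name ceph-<fsid>@osd.1.service is an example of the cephadm naming scheme; substitute your fsid and OSD id):

# /etc/systemd/system/ceph-<fsid>@osd.1.service.d/override.conf
[Service]
LimitMEMLOCK=infinity

# then reload systemd and restart the OSD
systemctl daemon-reload
systemctl restart ceph-<fsid>@osd.1.service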
Sincerely
Jan Marek
On Mon, Feb 05, 2024 at 11:
but it cannot join the cluster...
Sincerely
Jan Marek
--
Ing. Jan Marek
University of South Bohemia
Academic Computer Centre
Phone: +420389032080
http://www.gnu.org/philosophy/no-word-attachments.cs.html
dy set
require_osd_release parameter to octopus.
I suggest using variant 1) and I'm sending the attached patch.
There is another question: does the MON daemon have to check
require_osd_release when it is joining the cluster, if it
cannot raise its value?
It is a potentially dangerous situation,
aised automatically, when every MON
daemon in the cluster has this version. Is there a reason
not to raise the require-osd-release parameter automatically?
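In case it is useful, a sketch of how the flag can be inspected and raised by hand today (the release name follows the text above):

# show the currently required OSD release
ceph osd dump | grep require_osd_release
# raise it explicitly once all daemons run the new release
ceph osd require-osd-release octopus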
Sincerely
Jan Marek
On Fri, Oct 07, 2022 at 11:08:52 CEST, Dan van der Ster wrote:
> Hi Jan,
>
> It looks like you got into this situation
them the addresses of the OSD nodes from the
192.168.1.0/24 network, or will it give them addresses randomly?
Please, does someone have advice on how to set up this networking
optimally?
Thanks a lot.
Sincerely
Jan Marek
--
Ing. Jan Marek
University of South Bohemia
Academic Computer Centre
Phone: +420389032080
# ceph orch ps
I don't have mon.host1 in the listing and mon.host1 is among the stray daemons? :-(
And how can it be written in a YAML file for ceph orch apply?
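A minimal sketch of the kind of mon spec I mean (the host names are placeholders; host1 is the one from the listing above, and the file would be applied with 'ceph orch apply -i mon.yaml'):

service_type: mon
placement:
  hosts:
    - host1
    - host2
    - host3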
Sincerely
Jan Marek
On Mon, Nov 28, 2022 at 02:36:11 CET, Jan Marek wrote:
> Hello,
>
> I have a CEPH cluster with 3 MONs and 6 OSD nodes
.
Disabling old systemd unit ceph-osd@12...
Moving data...
Traceback (most recent call last):
  File "/usr/sbin/cephadm", line 9468, in <module>
    main()
  File "/usr/sbin/cephadm", line 9456, in main
    r = ctx.func(ctx)
  File "/usr/sbin/cephadm", line 2135, in _default_image
6, then I tried the 'cephadm adopt' command once more and voila!
It works like a charm.
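For reference, the kind of invocation meant here, as a sketch (osd.12 from the log above is the example daemon):

# adopt a legacy ceph-osd systemd unit into cephadm management
cephadm adopt --style legacy --name osd.12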
I will try to configure the OSDs on node 1 to adopt the WAL and DB
from the prepared LVM... Maybe after an upgrade to a newer version of
CEPH it will be OK?
Sincerely
Jan Marek
--
Ing. Jan Marek
University of South Bohemia
vices:
  paths:
    - /dev/nvme0n1
filter_logic: AND
objectstore: bluestore
Now I have 12 OSDs with the DB on the NVMe device, but without a
WAL. How can I add a WAL to these OSDs?
The NVMe device still has 128 GB of free space.
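If it helps to make the question concrete: one way I know of to attach a separate WAL to an already-deployed OSD is the low-level BlueStore tool, roughly like this sketch (the paths and LV name are examples; the OSD must be stopped first, and under cephadm this runs inside the OSD's container/shell):

# attach a new WAL device to an existing OSD
ceph-bluestore-tool bluefs-bdev-new-wal \
    --path /var/lib/ceph/osd/ceph-0 \
    --dev-target /dev/ceph-nvme/wal-osd0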
Thanks a lot.
Sincerely
Jan Marek
--
Ing. Jan Marek
University of South Bohemia
Academic Computer Centre
operational? I believe it would be the better choice if it stays
operational. And what if 2/3 of the locations "die"? On this cluster
there is a pool with cephfs - this is the main part of the CEPH cluster.
Many thanks for your comments.
Sincerely
Jan Marek
--
Ing. Jan Marek
University of South Bohemia
Academic Computer Centre
Phone: +420389032080
...)
Sincerely
Jan Marek
On Mon, Jul 10, 2023 at 08:10:58 CEST, Eugen Block wrote:
> Hi,
>
> if you don't specify a different device for WAL it will be automatically
> colocated on the same device as the DB. So you're good with this
> configuration.
>
> Regards,
>
ng 'osd': expected string of the form TYPE.ID, valid types are:
auth, mon, osd, mds, mgr, client\n"
I'm on the host on which this OSD 8 is located.
My CEPH version is the latest (I hope) quincy: 17.2.6.
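In case the context helps, the error above is what appears when a daemon is referenced as plain 'osd' instead of 'osd.8'; a sketch of the difference, assuming a ceph tell style command (the subcommand is only an example):

ceph tell osd version      # rejected: expected string of the form TYPE.ID
ceph tell osd.8 version    # accepted: type-qualified daemon name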
Thanks a lot for help.
Sincerely
Jan Marek
>
>
> Zitat von Ja
"wal_used_bytes": 0,
"files_written_wal": 535,
"bytes_written_wal": 121443819520,
"max_bytes_wal": 0,
"alloc_unit_wal": 0,
"read_random_disk_bytes_wal": 0,
"read_disk_bytes_wal&
a.b.7.0/24
mon      advanced  public_network   a.b.7.0/24
...
How can I do it safely?
Would it be correct to only set:
ceph config set global cluster_network a.b.7.0/24 ?
Do I then have to restart the mon and osd processes?
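What I have in mind concretely is roughly this sketch (it assumes the mons do not use the cluster_network at all and that the OSD service in 'ceph orch ls' is simply named 'osd'):

# set the back-end network cluster-wide
ceph config set global cluster_network a.b.7.0/24
# restart the OSDs so they rebind; the mons only use the public_network
ceph orch restart osd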
Many thanks for the advice.
Sincerely
Jan
--
Ing. Jan Marek
University of South Bohemia
the ceph host prepared by Ansible, thus there is
the same environment.
On every machine we have podman version 4.3.1+ds1-8+deb12u1 and
conmon version 2.1.6+ds1-1. The OS is Debian bookworm.
The attached logs were prepared by:
grep exec_died /var/log/syslog
Sincerely
Jan Marek
--
Ing. Jan Marek
University of South Bohemia
ng this CEPH cluster as storage
for ProxMox virtualization and some virtual machines don't "survive"
this situation, as their "disks" are not accessible :-(.
Is there some solution which we can try?
Many thanks for any advice.
Sincerely
Jan Marek
--
Ing. Jan Marek
University of South Bohemia
t there is
a swapped numerator and denominator in the fraction, isn't there?
Sincerely
Jan Marek
--
Ing. Jan Marek
University of South Bohemia
Academic Computer Centre
Phone: +420389032080
http://www.gnu.org/philosophy/no-word-attachments.cs.html
+recovering+undersized+remapped
Are these statistics OK?
Sincerely
Jan Marek
On Thu, Feb 27, 2025 at 10:46:55 CET, Jan Marek wrote:
> Hello,
>
> I have a newly created ceph cluster, I had some issues with disks,
> and now I have this 'ceph -s' list:
>