Dear all,
I have a problem that after an OSD host lost connection to the
sync/cluster rear network for many hours (the public network was
online), a test VM using RBD can't overwrite its files. I can create a
new file inside it just fine, but not overwrite one; the process just hangs.
The VM's
"release": "luminous",
"num": 12
}
],
"mgr": [
{
"features": "0x3f01cfbf7ffdffff",
"release": "luminous",
"num": 2
}
]
}
Regards,
Peter
On 2023-09-29 at 17:
"features": "0x3f01cfbf7ffd",
"release": "luminous",
"num": 12
}
],
"mgr": [
{
"features": "0x3f01cfbf7ffd",
"release": "
Not really. I'm assuming that they have been working hard at it and I
remember hearing something about a more recent rocksdb version shaving
off significant time. It would also depend on your CPU and memory speed.
I wouldn't be at all surprised if latency is lower today, but I haven't
really measured it.
With qd=1 (queue depth?) and a single thread, this isn't totally
unreasonable.
Ceph will have an internal latency of around 1ms or so, add some network
to that and an operation can take 2-3ms. With a single operation in
flight all the time, this means 333-500 operations per second. With
hdds,
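The arithmetic above can be sketched in a few lines. This is just the poster's back-of-the-envelope model (the 2-3 ms figures are their estimates, not measurements): at queue depth 1, only one operation is ever in flight, so throughput is simply the reciprocal of per-operation latency.

```python
# Queue-depth-1 model: one I/O in flight at a time, so max throughput
# is bounded entirely by end-to-end latency per operation.

def qd1_iops(latency_ms: float) -> float:
    """Max IOPS at queue depth 1 given per-operation latency in ms."""
    return 1000.0 / latency_ms

print(qd1_iops(3.0))  # ~333 IOPS at 3 ms per operation
print(qd1_iops(2.0))  # 500 IOPS at 2 ms per operation
```

This is why a single-threaded qd=1 benchmark says very little about what the cluster can sustain in aggregate.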
That is indeed a lot nicer hardware, and 1804 IOPS is faster, but still
lower than a USB thumb drive.
The thing with Ceph is that it scales out really, really well, but
scaling up is harder. That is, if you run like 500 of these tests at the
same time, then you can see what it can do.
Some guy
performance, wouldn't it?
Also, that wouldn't explain why we're seeing a bit of improvement with
size=1 for a specific pool but not a massive improvement, given that
at least half of the latency is taken out of the equation in that case.
Best regards
Martin
Peter Linder wrote on Tue
This may work in order to add RGW to a proxmox-ceph cluster:
https://pve.proxmox.com/wiki/User:Grin/Ceph_Object_Gateway
I haven't tried it myself yet, but I will when I get some spare time.
There will be no dashboard or anything, so be prepared to manage
everything through the CLI only.
It would m
The balancer is on; that is what triggers new misplaced objects whenever
the ratio goes near/below 5%.
You may want to disable it, or by all means let it eventually finish.
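A sketch of the two options mentioned above, using the standard balancer module commands (pool-agnostic; run against your cluster):

```shell
# Inspect what the balancer is currently doing.
ceph balancer status

# Option 1: turn it off so no new misplaced objects are generated.
ceph balancer off

# Option 2 (later): re-enable it and let it eventually finish.
ceph balancer on
```

Turning it off only stops new remapping; already-misplaced objects will still recover.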
On 2025-01-14 at 16:03, Ml Ml wrote:
Hello List,
I have this 3-node setup with 17 HDDs (new, spinning rust).
After p
To get everything up to a working state, you will need to set your
failure domain to "osd" instead of "host" in the default rule, and as it
has been said before, pool size should be 3 and min_size 2.
With that said, you will eventually need more hosts to get the most out
of ceph.
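The steps above could look roughly like this. Assumptions: the pool is called `mypool` and the rule `replicated_osd`; both names are placeholders, and the commands are the standard CRUSH/pool commands:

```shell
# Create a replicated CRUSH rule whose failure domain is "osd"
# instead of the default "host".
ceph osd crush rule create-replicated replicated_osd default osd

# Point the pool at the new rule and set the recommended sizes.
ceph osd pool set mypool crush_rule replicated_osd
ceph osd pool set mypool size 3
ceph osd pool set mypool min_size 2
```

With failure domain "osd", two replicas can land on the same host, which is exactly why more hosts are needed to get real redundancy out of Ceph.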
On 2025-0
You have some whitespace character at the end of the filename, so it
looks like the same name but it is not.
-rw-r--r-- 1 userxxx groupyyy 54370 May 3 16:49 mos2_nscf.in
-rw-r--r-- 1 userxxx groupyyy 2242 May 6 17:16 "mos2_nscf.out"
-rw-r--r-- 1 userxxx groupyyy 1865 May 3 17:28 "mos2_nscf.out "
-rw-r
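A quick way to spot this kind of invisible-whitespace problem is to print each filename with `repr()`, which makes trailing spaces explicit. A minimal sketch using a temporary directory (the filenames mirror the listing above):

```python
import os
import tempfile

# Reproduce the situation: two files whose names differ only by a
# trailing space.
d = tempfile.mkdtemp()
open(os.path.join(d, "mos2_nscf.out"), "w").close()
open(os.path.join(d, "mos2_nscf.out "), "w").close()  # trailing space!

for name in sorted(os.listdir(d)):
    print(repr(name))
# 'mos2_nscf.out'
# 'mos2_nscf.out '
```

On the shell side, `ls | cat -A` achieves something similar by printing a `$` at the true end of each line.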
There is also the issue that if you have a 4+8 EC pool, you ideally need
at least 4+8 of whatever your failure domain is, in this case DCs. This
is more than most people have.
Is this k=4, m=8? What is the benefit of this compared to an ordinary
replicated pool with 3 copies?
Even if you set
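To make the comparison concrete, here is the raw-space arithmetic (a back-of-the-envelope sketch, not a full durability analysis): a k=4, m=8 EC pool stores 12 chunks per 4 data chunks, the same 3x overhead as size=3 replication, but it survives 8 lost chunks instead of 2 lost copies, at the cost of needing 12 failure domains.

```python
# Raw-space overhead: EC stores k data + m coding chunks per k chunks
# of user data; replication stores `size` full copies.

def ec_overhead(k: int, m: int) -> float:
    return (k + m) / k

def replica_overhead(size: int) -> float:
    return float(size)

print(ec_overhead(4, 8))    # 3.0x raw space, tolerates m=8 failures
print(replica_overhead(3))  # 3.0x raw space, tolerates 2 failures
```

So at equal overhead, k=4/m=8 buys fault tolerance, not space, and only if you actually have 12 failure domains to spread the chunks over.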