Hello.
I'm currently verifying the behavior of RBD on failure. I'm wondering
about the consistency of RBD images after network failures. As a
result of my investigation, I found that RBD sets a watcher on an RBD
image when a client maps the volume, to prevent multiple mounts. In
addition, I found tha
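For anyone reproducing this, the watcher on an image can be inspected from the CLI. A minimal sketch, assuming a pool/image named mypool/myimage (placeholders, not from the original mail):

```shell
# List current watchers on an RBD image; a client that has mapped
# the image shows up here with its address and watch cookie.
rbd status mypool/myimage

# Lower-level view of the same watch, straight from the image's
# header object (requires jq; image id is looked up via rbd info).
rados -p mypool listwatchers \
    rbd_header.$(rbd info mypool/myimage --format json | jq -r .id)
```

If the watch survives a network failure, it should eventually time out and disappear from this list once the client stops renewing it.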
Hmm... seems I might have been blinded and looking in the wrong place.
I did some scripting and took a look at all the *. objects'
"parent" xattrs on the pool. Nothing funky there and no files with a
backtrace pointing to that deleted folder. No considerable amount of these
inode object "s
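For reference, the per-object "parent" xattr holds an encoded inode backtrace; one way to dump it is via ceph-dencoder. A sketch, assuming a data pool named cephfs_data and an example object name (both placeholders):

```shell
# Fetch the "parent" xattr of a CephFS data-pool object and decode
# the inode backtrace it contains (object name is an example).
rados -p cephfs_data getxattr 10000000000.00000000 parent > parent.bin
ceph-dencoder type inode_backtrace_t import parent.bin decode dump_json
```

The decoded JSON lists the ancestor dentries, which is what lets you check whether any object still points at a deleted folder.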
Hi all,
The User + Dev Meetup will be held tomorrow at 10:00 AM EDT. We will be
discussing the results of the latest survey, and users who attend will have
the opportunity to provide additional feedback in real time.
See you there!
Laura Flores
Meeting Details:
https://www.meetup.com/ceph-user-g
Hi,
On 22/05/2024 12:44, Eugen Block wrote:
you can specify the entire tree in the location statement, if you need to:
[snip]
Brilliant, that's just the ticket, thank you :)
This should be made a bit clearer in the docs [0]; I added Zac.
I've opened a MR to update the docs, I hope it's a
Hi Iain,
Can you check if it relates to this? --
https://tracker.ceph.com/issues/63373
There is a bug when bulk deleting objects that causes the RGWs to deadlock.
Cheers,
Enrico
On 5/17/24 11:24, Iain Stott wrote:
Hi,
We are running 3 clusters in multisite. All 3 were running Quincy 17.2.6 an
Hi,
you can specify the entire tree in the location statement, if you need to:
ceph:~ # cat host-spec.yaml
service_type: host
hostname: ceph
addr:
location:
root: default
rack: rack2
and after the bootstrap it looks as expected:
ceph:~ # ceph osd tree
ID CLASS WEIGHT TYPE NAME
Hi Stefan,
ahh OK, I misunderstood your e-mail. It sounded like it was a custom profile, not
a standard one shipped with tuned.
Thanks for the clarification!
=
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
From: Stefan Bauer
Sent: We
Hi Frank,
it's pretty straightforward. Just follow the steps:
apt install tuned
tuned-adm profile network-latency
According to [1]:
network-latency
A server profile focused on lowering network latency.
This profile favors performance over power savings by setting
intel_pstate and
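The two steps above can be condensed into a short session; tuned-adm active and tuned-adm verify are the standard ways to confirm the profile took effect (Debian/Ubuntu packaging assumed, as in the original mail):

```shell
# Install tuned and switch to the low-latency network profile.
apt install -y tuned
tuned-adm profile network-latency

# Confirm which profile is active and verify its settings applied.
tuned-adm active
tuned-adm verify
```

Note the profile persists across reboots, so a single tuned-adm profile call is enough.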
Hi Stefan,
can you provide a link to or copy of the contents of the tuned-profile so
others can also profit from it?
Thanks!
=
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
From: Stefan Bauer
Sent: Wednesday, May 22, 2024 10:51 AM
Hi Anthony and others,
thank you for your reply. To be honest, I'm not even looking for a
solution, I just wanted to ask whether latency affects performance at all
in my case and how others handle this ;)
One of our partners delivered a solution with a latency-optimized
profile for tuned-dae
I have already installed multiple one-node Ceph clusters with CephFS for
non-productive workloads over the last few years.
I had no major issues, e.g. once a broken HDD. The question is what kind of EC
or replication you will use. Also, only power off the node in a clean and
healthy state ;-)
What woul
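For a single-node setup like this, the usual prerequisite is a CRUSH rule whose failure domain is the OSD rather than the host, so replicas can land on different disks of the same node. A minimal sketch (rule and pool names are examples, not from the original mail):

```shell
# Replicated CRUSH rule with "osd" as the failure domain, so
# copies may be placed on different OSDs of the single node.
ceph osd crush rule create-replicated single-node default osd

# Example pool using that rule: 2 copies, writes allowed with 1.
ceph osd pool create mypool 32 32 replicated single-node
ceph osd pool set mypool size 2
ceph osd pool set mypool min_size 1
```

With size 2 / min_size 1 a single failed disk is survivable; EC profiles on one node need the same failure-domain change via crush-failure-domain=osd.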