Some kernels (el7?) lie about being jewel until after they are blocked from
connecting at jewel; then they report newer. Just FYI.
From: Anthony D'Atri
Sent: Tuesday, August 6, 2024 5:08 PM
To: Fabien Sirjean
Cc: ceph-users
Subject: [ceph-users] Re: What'
I thought BlueStore stored that stuff in non-LVM mode?
From: Robert Sander
Sent: Monday, September 2, 2024 11:35 PM
To: ceph-users@ceph.io
Subject: [ceph-users] Re: Issue Replacing OSD with cephadm: Partition Path Not
Accepted
We have a fairly old cluster that has over time been upgraded to nautilus. We
were digging through some things and found 3 bucket indexes without a
corresponding bucket. They should have been deleted but somehow were left
behind. When we try and delete the bucket index, it will not allow it as t
Ping
From: Fox, Kevin M
Sent: Tuesday, December 29, 2020 3:17 PM
To: ceph-users@ceph.io
Subject: [ceph-users] radosgw bucket index issue
We have a fairly old cluster that has over time been upgraded to nautilus. We
were digging through some things and
+1
From: Marc
Sent: Thursday, February 11, 2021 12:09 PM
To: ceph-users
Subject: [ceph-users] ceph osd df results
Should the ceph osd df results not have this result for every device
There are a lot of benefits to containerization that are hard to get without it.
Finer-grained ability to allocate resources to services (this process gets 2 GB
of RAM and 1 CPU).
Security is better where only minimal software is available within the
container, so on service compromise it's harder to e
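As a concrete sketch of the resource-allocation point (the image name and limits here are made up; podman shown, docker takes the same flags):

```
# Made-up example: run one service capped at 2 GB of RAM and 1 CPU.
podman run --rm --memory 2g --cpus 1 registry.example.com/some-service:latest
```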
The quick answer is they are optimized for different use cases.
Things like relational databases (mysql, postgresql) benefit from the
performance that a dedicated filesystem can provide (rbd). Shared filesystems
are usually contraindicated with such software.
Shared filesystems like cephfs a
Debating containers vs packages is like debating systemd vs initrd. There are
lots of reasons why containers (and container orchestration) are good for
deploying things, including ceph. Repeating them in each project every time it
comes up is not really productive. I'd recommend looking at why c
While there are many reasons containerization helps, I'll just touch on one
real quick that is relevant to the conversation.
Orchestration.
Implementing orchestration of an entire piece of clustered software across many different:
* package managers
* dependency chains
* init systems
* distro specific quir
Ultimately, that's what a container image is. From the outside, it's a
statically linked binary. From the inside, it can be assembled using modular
techniques. The best thing about it is that you can use container scanners and
other techniques to gain a lot of the benefits of that modularity still. P
I've actually had rook-ceph not proceed with something that I would have
continued on with. Turns out I was wrong and it was right. Its checking was
more thorough than mine. Thought that was pretty cool. It eventually cleared
itself and finished up.
For a large ceph cluster, the orchestration is
I bumped into this recently:
https://samuel.karp.dev/blog/2021/05/running-freebsd-jails-with-containerd-1-5/
:)
Kevin
From: Sage Weil
Sent: Thursday, June 24, 2021 2:06 PM
To: Stefan Kooman
Cc: Nico Schottelius; Kai Börnert; Marc; ceph-users
Subject: [ce
Orchestration is hard, especially with every permutation. The devs have
implemented what they feel is the right solution for their own needs from the
sound of it. The orchestration was made modular to support non containerized
deployment. It just takes someone to step up and implement the permut
https://docs.ceph.com/en/latest/rbd/rbd-openstack/
From: Szabo, Istvan (Agoda)
Sent: Wednesday, June 30, 2021 9:50 AM
To: Ceph Users
Subject: [ceph-users] Ceph connect to openstack
H
I'm not aware of any directly, but I know rook-ceph is used on Kubernetes, and
Kubernetes is sometimes deployed with BGP based SDN layers. So there may be a
few deployments that do it that way.
From: Martin Verges
Sent: Monday, July 5, 2021 11:23 PM
To:
We launch a local registry for cases like these and mirror the relevant
containers there. This keeps copies of the images closer to the target cluster
and reduces load on the public registries. It's not that much different from
mirroring a yum/apt repo locally to speed up access. For large cluste
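A rough sketch of the mirroring step (skopeo is just one option; the registry host and image tag are examples):

```
# Copy an upstream Ceph image into a local mirror registry (example host/tag).
skopeo copy docker://quay.io/ceph/ceph:v17.2.7 docker://registry.local:5000/ceph/ceph:v17.2.7

# Then point the cluster at the mirrored image (hedged; check the cephadm docs):
ceph config set global container_image registry.local:5000/ceph/ceph:v17.2.7
```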
How do you know if it's safe to set `require-min-compat-client=reef` if you have
kernel clients?
Thanks,
Kevin
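For reference, a rough sketch of the check (hedged, from memory; verify against the docs for your release):

```
# List connected clients grouped by the release/features they negotiate.
# Kernel clients sometimes report an older release than the features they
# actually support, so look at the feature bits, not just the release name.
ceph features

# Only once you're confident no client would be locked out:
ceph osd set-require-min-compat-client reef
```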
From: Laura Flores
Sent: Wednesday, May 29, 2024 8:12 AM
To: ceph-users; dev; clt
Cc: Radoslaw Zarzynski; Yuri Weinstein
Subject: [ceph-users] B
Would it cause problems to mix the smartctl exporter with Ceph's built-in
monitoring stuff?
Thanks,
Kevin
From: Wyll Ingersoll
Sent: Friday, October 14, 2022 10:48 AM
To: Konstantin Shalygin; John Petrini
Cc: Marc; Paul Mezzanini; ceph-users
Subjec
I haven't done it, but I had to read through the documentation a couple of months
ago, and what I gathered was:
1. if you have a db device specified but no wal device, it will put the WAL on
the same volume as the DB (see the sketch after this list).
2. the recommendation seems to be to not have a separate volume for db and wal
if on
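A minimal sketch of point 1, assuming ceph-volume and made-up device paths:

```
# Made-up device paths: a db device is given but no wal device, so the WAL
# lands on the same (faster) volume as the DB.
ceph-volume lvm create --data /dev/sdb --block.db /dev/nvme0n1p1
```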
If it's the same issue, I'd check the fragmentation score on the entire cluster
ASAP. You may have other OSDs close to the limit, and it's harder to fix when all
your OSDs cross the line at once. If you drain this one, it may push the other
ones into the red zone if you're too close, making the probl
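If it helps, a sketch of how to pull the score per OSD (the OSD id is an example; command from memory, verify on your release):

```
# Query the BlueStore allocator fragmentation score for osd.0 via its admin
# socket; a rating near 1.0 means heavily fragmented free space.
ceph daemon osd.0 bluestore allocator score block
```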
There should be prom metrics for each.
Thanks,
Kevin
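A hedged example of the kind of query I mean (metric names from memory, out of the mgr prometheus module; double-check them on your version):

```
# Rough example: fraction of the BlueFS DB volume in use, queried from a
# hypothetical Prometheus at prometheus.local (POST to the query API).
curl -s 'http://prometheus.local:9090/api/v1/query' \
  --data-urlencode 'query=ceph_bluefs_db_used_bytes / ceph_bluefs_db_total_bytes'
```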
From: Christophe BAILLON
Sent: Monday, November 14, 2022 10:08 AM
To: ceph-users
Subject: [ceph-users] How to monitor growing of db/wal partitions ?
I think you can do it like:
```
service_type: rgw
service_id: main
service_name: rgw.main
placement:
  label: rgwmain
spec:
  config:
    rgw_keystone_admin_user: swift
```
?
From: Thilo-Alexander Ginkel
Sent: Thursday, November 17, 2022 10:21 AM
To: Case
When we switched (we were using the compat balancer previously), I did the
following (rough commands are sketched after this list):
1. turned off the balancer
2. forced the client minimum (new CentOS 7 clients are OK being forced to
luminous even though they report as jewel; there's an email thread elsewhere
describing it)
3. slowly reweighted the crush compat w
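Roughly, the commands behind those steps (sketched from memory; the OSD name and weight are placeholders, verify against the balancer docs):

```
ceph balancer off
ceph osd set-require-min-compat-client luminous
# Step the compat weight-set toward the real crush weights a bit at a time,
# e.g. for one OSD (placeholder id/weight):
ceph osd crush weight-set reweight-compat osd.12 1.81929
# Once it matches the crush weights, the compat weight-set can be removed:
ceph osd crush weight-set rm-compat
```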
If it's this:
http://www.acmemicro.com/Product/17848/Kioxia-KCD6XLUL15T3---15-36TB-SSD-NVMe-2-5-inch-15mm-CD6-R-Series-SIE-PCIe-4-0-5500-MB-sec-Read-BiCS-FLASH-TLC-1-DWPD
it's listed as 1 DWPD with a 5-year warranty, so it should be OK.
Thanks,
Kevin
From: Rob
We went on a couple clusters from ceph-deploy+centos7+nautilus to
cephadm+rocky8+pacific using ELevate as one of the steps. Went through octopus
as well. ELevate wasn't perfect for us either, but was able to get the job
done. Had to test it carefully on the test clusters multiple times to get th
Is there any problem removing the radosgw and all backing pools from a
cephadm-managed cluster? Ceph won't become unhappy about it? We have one cluster
with a really old, historical radosgw that we think would be better to remove
and, someday later, recreate fresh.
Thanks,
Kevin
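For context, the removal itself would presumably look something like this (service and pool names are placeholders; sketch only):

```
# Placeholder service/pool names; sketch only.
ceph orch rm rgw.historical            # remove the cephadm-managed rgw service
ceph osd pool ls | grep rgw            # find the backing pools
ceph config set mon mon_allow_pool_delete true
ceph osd pool rm default.rgw.meta default.rgw.meta --yes-i-really-really-mean-it
```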
What else is going on? (ceph -s). If there is a lot of data being shuffled
around, it may just be because it's waiting for some other actions to complete
first.
Thanks,
Kevin
From: Torkil Svensgaard
Sent: Tuesday, January 10, 2023 2:36 AM
To: ceph-users@
If you have prometheus enabled, the metrics should be in there I think?
Thanks,
Kevin
From: Peter van Heusden
Sent: Thursday, January 12, 2023 6:12 AM
To: ceph-users@ceph.io
Subject: [ceph-users] BlueFS spillover warning gone after upgrade to Quincy
We successfully did ceph-deploy+octopus+centos7 -> (ceph-deploy
unsupported)+octopus+centos8stream (using leap) -> (ceph-deploy
unsupported)+pacific+centos8stream -> cephadm+pacific+centos8stream
Everything in place. Leap was tested repeatedly till the procedure/side effects
were very well know
Minio no longer lets you read / write from the posix side. Only through minio
itself. :(
Haven't found a replacement yet. If you do, please let me know.
Thanks,
Kevin
From: Robert Sander
Sent: Tuesday, February 28, 2023 9:37 AM
To: ceph-users@ceph.io
Su
+1. If I know radosgw on top of cephfs is a thing, I may change some plans. Is
that the planned route?
Thanks,
Kevin
From: Daniel Gryniewicz
Sent: Monday, March 6, 2023 6:21 AM
To: Kai Stian Olstad
Cc: ceph-users@ceph.io
Subject: [ceph-users] Re: s3 comp
Will either the file store or the posix/gpfs filter support the underlying
files changing underneath, so you can access the files either through S3 or by
other out-of-band means (smb, nfs, etc.)?
Thanks,
Kevin
From: Matt Benjamin
Sent: Monday, March 20, 2
I've seen this in production on two separate occasions as well: one OSD gets
stuck and a bunch of PGs go into the laggy state.
ceph pg dump | grep laggy
shows all the laggy PGs share the same OSD.
Restarting the affected OSD restored full service.
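For reference, the restart itself was just a normal per-daemon restart; the exact form depends on the deployment (the OSD id below is a placeholder):

```
# cephadm-managed cluster (placeholder OSD id):
ceph orch daemon restart osd.123
# or a package/systemd deployment:
systemctl restart ceph-osd@123
```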
From
Is this related to https://tracker.ceph.com/issues/58022 ?
We still see runaway OSDs at times, somewhat randomly, which cause runaway
fragmentation issues.
Thanks,
Kevin
From: Igor Fedotov
Sent: Thursday, May 25, 2023 8:29 AM
To: Hector Martin; ceph-us
If you can give me instructions on what you want me to gather before the
restart and after restart I can do it. I have some running away right now.
Thanks,
Kevin
From: Igor Fedotov
Sent: Thursday, May 25, 2023 9:17 AM
To: Fox, Kevin M; Hector Martin
720
if that is the right query, then I'll gather the metrics, restart and gather
some more after and let you know.
Thanks,
Kevin
From: Igor Fedotov
Sent: Thursday, May 25, 2023 9:29 AM
To: Fox, Kevin M; Hector Martin; ceph-users@ceph.io
Subject:
-30T18:35:22.826+ 7fe190013700 0
bluestore(/var/lib/ceph/osd/ceph-183) probe -20: 0, 0, 0
Thanks,
Kevin
From: Fox, Kevin M
Sent: Thursday, May 25, 2023 9:36 AM
To: Igor Fedotov; Hector Martin; ceph-users@ceph.io
Subject: Re: [ceph-users] Re: BlueStore
Does Quincy automatically switch existing things to 4k, or do you need to do a
new OSD to get the 4k size?
Thanks,
Kevin
From: Igor Fedotov
Sent: Wednesday, June 21, 2023 5:56 AM
To: Carsten Grommel; ceph-users@ceph.io
Subject: [ceph-users] Re: Ceph Pacif
That is super useful. Thank you so much for sharing! :)
Kevin
From: Frank Schilder
Sent: Friday, October 25, 2024 8:03 AM
To: ceph-users@ceph.io
Subject: [ceph-users] Re: pgs not deep-scrubbed in time and pgs not scrubbed in
time
The bulk flag is about tuning the pool assuming it will be close-ish to full,
not based on what utilization currently is. That way, as you add a bunch of
data, it isn't constantly adding more PGs to the pool.
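For reference, it's a per-pool flag (the pool name is a placeholder):

```
# Tell the autoscaler to size PGs as if the pool will hold a large share of
# the cluster, instead of growing pg_num as data trickles in.
ceph osd pool set mypool bulk true
# Inspect the current value:
ceph osd pool get mypool bulk
```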
From: Anthony D'Atri
Sent: Tuesday, January 14, 2025 9
One of the referenced issues seems to indicate it may be in Quincy too? Does
anyone know if that's true, and when it was introduced? Does it go back even
farther?
Thanks,
Kevin
From: Devender Singh
Sent: Thursday, May 1, 2025 11:40 AM
To: Alex
Cc: Dan va
My 2 cents: for the enterprise, there is nothing better than being able to have
codified all the best practices around the proper orchestration of something as
complicated as a Ceph cluster into an orchestration layer, and to have done so
in a way that can help prevent mistakes. Without the orchestrator