On 6/22/25 18:25, Hector Martin wrote:
> On 2025/06/23 0:21, Anthony D'Atri wrote:
>> DIMMs are cheap.
> No DIMMs on Apple Macs.
>> You’re running virtualized in VMs or containers, with OSDs, mons, mgr,
>> and the constellation of other daemons with resources dramatically
>> below recommendations. I’ll speculate that at least the HDDs are
>> USB-attached, or perhaps you’re on an old cheese-grater?
> No, I'm running on bare metal. It's kind of the project I started a few
> years ago and everything: https://asahilinux.org/
>
> Yes, the HDDs are USB-attached, and my running this Ceph workload has
> led directly to finding and fixing many years-old Linux kernel USB
> driver bugs (affecting all platforms, not just funny ones like this
> one), and even to discovering others that haven't been tracked down yet
> but that we're currently debugging. If I hadn't run this "strange"
> workload, those bugs would not have been found and fixed.
> I've also helped track down and fix broken Ceph stuff in Fedora as part
> of all this, but I'm sure you'll say Fedora is also an unsupported
> distribution.
> In fact, through this whole experiment I even found a *GCC 13
> regression* that miscompiled Ceph:
>
> https://tracker.ceph.com/issues/63867
> https://gcc.gnu.org/bugzilla/show_bug.cgi?id=113359
> Should I give up running my "unsupported configuration" and stop
> finding and fixing all these bugs that affect lots of other
> configurations and deployments of Ceph and non-Ceph things?
Please don't, if you ask me. If we want to learn from other communities:
the OpenBSD project supports many different architectures and platforms,
which has helped them uncover machine-independent bugs that might affect
other systems under the "right" conditions. Fixing those helps the
entire ecosystem.
Ceph is considered the "Swiss army knife of storage". To try to advance
beyond what is currently deemed possible with Ceph, we need to push its
boundaries.
For some users and use cases this might mean having Ceph scale to
ever-larger clusters; for others it might mean making Ceph work on
ever-smaller hardware. Both can be useful, and they can benefit each
other. Work is being done to make Ceph more CPU efficient (the Crimson
project). That work would allow lower-end systems to run Ceph where they
currently might not. I'm not sure whether memory efficiency is also part
of their goals, but I expect it will be taken into consideration (to
allow for a drop-in replacement). It also helps Ceph run bigger and
faster on large systems (which is their main goal).
Expanding the range of applications for Ceph storage systems will
contribute to the growth of the community, which benefits everyone involved.
My 2 cents,
Gr. Stefan