Not using cephadm, I would also question other things, like:
- If it uses Docker and the Docker daemon fails, what happens to your containers?
- I assume the ceph-osd containers need the Linux capability CAP_SYS_ADMIN. So if you
have to allow this via your OC, all your tasks potentially have access to this
permission (a quick way to check is sketched below).
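A rough way to verify what a running OSD container actually holds, assuming a
podman-based deployment; the container name is only an example:

  CTR=ceph-osd-0                                      # illustrative name
  PID=$(podman inspect --format '{{.State.Pid}}' "$CTR")
  CAPS=$(awk '/CapEff/ {print $2}' /proc/$PID/status) # effective capability mask
  capsh --decode=$CAPS

If cap_sys_admin shows up in the decoded list, the concern above applies.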
I am seeing huge RAM usage while my bucket delete is churning
through left-over multiparts, and while I realize there are *many* being
done 1000 at a time, like this:
2021-06-03 07:29:06.408 7f9b7f633240 0 abort_bucket_multiparts
WARNING : aborted 254000 incomplete multipart uploads
..my fi
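For what it's worth, stale multiparts can also be cleaned up (or prevented)
from the S3 side; a sketch with the aws CLI, endpoint and bucket names being
placeholders, and assuming your RGW version honours the lifecycle rule:

  # list what is still pending
  aws --endpoint-url http://rgw.example:8080 s3api list-multipart-uploads --bucket mybucket
  # abort a single stale upload
  aws --endpoint-url http://rgw.example:8080 s3api abort-multipart-upload \
      --bucket mybucket --key path/to/object --upload-id <upload-id>
  # or let RGW expire them on its own via a lifecycle rule
  aws --endpoint-url http://rgw.example:8080 s3api put-bucket-lifecycle-configuration \
      --bucket mybucket --lifecycle-configuration '{
        "Rules": [{"ID": "abort-stale-mpu", "Status": "Enabled", "Filter": {"Prefix": ""},
                   "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 3}}]}'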
Podman containers will not restart due to a restart or failure of a centralized
podman daemon. Container is not synonymous with Docker. This thread reminds
me more and more of systemd-hater threads, but I guess it is fine.
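For anyone who wants systemd to supervise hand-rolled (non-cephadm) podman
containers, a minimal sketch; the container name is an example:

  podman generate systemd --new --name ceph-osd-0 \
      > /etc/systemd/system/container-ceph-osd-0.service
  systemctl daemon-reload
  systemctl enable --now container-ceph-osd-0.service

systemd then handles restart-on-failure, with no central container daemon involved.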
On Thu, Jun 3, 2021, 2:16 AM Marc wrote:
> Not using cephadm, I would also questi
Hi folks,
I'm fine with dropping Filestore in the R release!
Only one thing to add: please add a warning to all versions we can
upgrade from to the R release, so not only Quincy but also Pacific!
Thanks,
Ansgar
Neha Ojha wrote on Tue, June 1, 2021, 21:24:
> Hello everyone,
>
> Given that
Dear Sasha, and everyone else as well,
Sasha Litvak writes:
> Podman containers will not restart due to a restart or failure of a centralized
> podman daemon. Container is not synonymous with Docker. This thread reminds
> me more and more of systemd-hater threads, but I guess it is fine.
calling p
On 2-6-2021 21:56, Neha Ojha wrote:
On Wed, Jun 2, 2021 at 12:31 PM Willem Jan Withagen wrote:
On 1-6-2021 21:24, Neha Ojha wrote:
Hello everyone,
Given that BlueStore has been the default and more widely used
objectstore for quite some time, we would like to understand whether
we can consi
My Cephadm deployment on RHEL8 created a service for each container, complete
with restarts. And on the host, the processes run under the 'ceph' user
account.
The biggest issue I had with running as containers is that the generated unit.run
script runs podman with --rm ... with the --rm, the logs a
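For reference, since --rm removes the stopped container, the usual place to look
is journald rather than podman logs; fsid and daemon name below are placeholders:

  cephadm logs --name osd.12
  journalctl -u ceph-<fsid>@osd.12.service -f
  # or turn file logging back on inside the containers
  ceph config set global log_to_file true

With log_to_file enabled, the daemons should write under /var/log/ceph/<fsid>/ again.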
On Thu, Jun 3, 2021 at 2:34 AM Ansgar Jazdzewski
wrote:
>
> Hi folks,
>
> I'm fine with dropping Filestore in the R release!
> Only one thing to add: please add a warning to all versions we can upgrade
> from to the R release, so not only Quincy but also Pacific!
Sure!
- Neha
>
> Thanks,
>
I suspect the behavior of the controller and the behavior of the drive
firmware will end up mattering more than SAS vs SATA. As always it's
best if you can test it first before committing to buying a pile of
them. Historically I have seen SATA drives that have performed well as
far as HDDs go
Dave-
These are just general observations of how SATA drives operate in storage
clusters.
It has been a while since I have run a storage cluster with SATA drives,
but in the past I did notice that SATA drives would drop off the
controllers pretty frequently. Depending on many factors, it may just
I am running the ceph-ansible playbook to install Ceph version stable-6.0
(Pacific).
When running the sample yml file supplied by the GitHub repo, it runs fine
up until the "ceph-mon : check if monitor initial keyring already exists"
step. There it hangs for 30-40 minutes before failing.
Agreed. I think oh …. maybe 15-20 years ago there was often a wider difference
between SAS and SATA drives, but with modern queuing etc. my sense is that
there is less of an advantage. Seek and rotational latency I suspect dwarf
interface differences wrt performance. The HBA may be a bigger
FWIW, those guidelines try to be sort of a one-size-fits-all
recommendation that may not apply to your situation. Typically RBD has
pretty low metadata overhead so you can get away with smaller DB
partitions. 4% should easily be enough. If you are running heavy RGW
write workloads with small
In releases before … Pacific I think, there are certain discrete capacities
that the DB will actually utilize: the sums of the RocksDB levels. Lots of
discussion in the archives. AIUI in those releases, with a 500 GB BlueStore
WAL+DB device, with default settings you'll only actually use ~300 GB most
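Back-of-the-envelope version of that number, assuming the commonly quoted
RocksDB defaults of those releases (max_bytes_for_level_base ~ 256 MB, level
multiplier 10) and that a level only lands on the fast device if it fits there
entirely:

  L1 ~ 0.256 GB
  L2 ~ 2.56  GB
  L3 ~ 25.6  GB
  L4 ~ 256   GB
  L1+L2+L3+L4 ~ 284 GB  (roughly the ~300 GB figure above)
  L5 (~2.56 TB) would not fit on a 500 GB device, so the remainder sits unused.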
Hello,
I had an OSD drop out a couple of days ago. This is 14.2.16, BlueStore, HDD +
NVMe, non-container. The HDD sort of went away. I powered down the node,
reseated the drive, and it came back. However, the OSD won't start.
systemctl --failed shows that the lvm2 pvscan failed, preventing the OS
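In case it helps, the usual sequence for bringing an LVM-backed OSD back after
a pvscan failure looks roughly like this (the OSD id is a placeholder):

  pvscan --cache                 # rebuild the LVM metadata cache
  vgchange -ay                   # activate the volume groups
  ceph-volume lvm activate --all
  systemctl status ceph-osd@12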
Hello,
We're planning another batch of OSD nodes for our cluster. Our prior nodes
have been 8 x 12TB SAS drives plus 500GB NVMe per HDD. Due to market
circumstances and the shortage of drives those 12TB SAS drives are in short
supply.
Our integrator has offered an option of 8 x 14TB SATA drives
Anthony,
I had recently found a reference in the Ceph docs that indicated something
like 40GB per TB for WAL+DB space. For a 12TB HDD that comes out to
480GB. If this is no longer the guideline I'd be glad to save a couple
dollars.
-Dave
--
Dave Hall
Binghamton University
kdh...@binghamton.edu
Mark,
We are running a mix of RGW, RBD, and CephFS. Our CephFS is pretty big,
but we're moving a lot of it to RGW. What prompted me to go looking for a
guideline was a high frequency of Spillover warnings as our cluster filled
up past the 50% mark. That was with 14.2.9, I think. I understand t
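For anyone chasing the same warnings, a quick way to see which OSDs spill over
and by how much (osd.0 is an example; run on the OSD's host):

  ceph health detail | grep -i spillover
  ceph daemon osd.0 perf dump | grep -E '"db_used_bytes"|"slow_used_bytes"'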