On Tue, Aug 16, 2022 at 12:44 PM Martin Traxl wrote:
>
> Hi,
>
> I am running a Ceph 16.2.9 cluster with wire encryption. From my ceph.conf:
>
> ms client mode = secure
> ms cluster mode = secure
> ms mon client mode = secure
> ms mon cluster mode = secure
> ms mon service mode = secure
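(For reference, a sketch of setting the same modes through the monitor
config database instead of ceph.conf; this is only an equivalent example,
not taken from the original message:)

    ceph config set global ms_client_mode secure
    ceph config set global ms_cluster_mode secure
    ceph config set global ms_mon_client_mode secure
    ceph config set global ms_mon_cluster_mode secure
    ceph config set global ms_mon_service_mode secure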
If you try shuffling a daemon around on some of the working hosts (e.g.
changing the placement of the node-exporter spec so that one of the working
hosts is excluded and the node-exporter there should be removed), is
cephadm able to actually complete that? Also, does device info for any or
all of
Okay, the fact that the removal is also not working means the idea of it
being "stuck" in some way is likely correct. The most likely culprit in
these scenarios in the past, as mentioned previously, has been hanging
ceph-volume commands. Maybe going to each of these new hosts and running
something like
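(as one illustrative possibility, not necessarily the exact command that
was being suggested here:)

    # check whether a ceph-volume process is hanging on the host and never exits
    ps aux | grep '[c]eph-volume'
    # and ask cephadm to re-scan devices to see whether that completes
    ceph orch device ls --refresh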
Hey Folks,
I have recently joined the Ceph user group. I work for Twitter in their
Storage Infrastructure group. We run our infrastructure on-prem. We are
looking at credible alternatives to AWS EBS (Elastic Block Storage) on-prem.
We want to run our OLTP databases with remotely mounted drives. My
Hi everyone,
We sent an earlier inquiry on this topic asking how many people are using
bluestore_compression_mode, but now we would like to know about users'
experience in a more general sense. Do you currently have
bluestore_compression_mode enabled? Have you tried enabling it in the past?
Have
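(For context, a minimal sketch of how compression is typically turned on;
the pool name and the mode/algorithm choices below are illustrative
assumptions only:)

    # cluster-wide default for all OSDs
    ceph config set osd bluestore_compression_mode aggressive
    # or per pool
    ceph osd pool set mypool compression_mode aggressive
    ceph osd pool set mypool compression_algorithm snappy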
Hello there,
Is anybody sharing their Ceph filesystem via Samba to Windows clients and willing
to share their experience, as well as settings in smb.conf and ceph.conf that
have performance impacts?
We have been running this setup for years now, but I think there is still room for
improvement and learn
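For what it's worth, a minimal vfs_ceph share tends to look roughly like the
sketch below; the share name, path and the "samba" cephx user are
illustrative assumptions, not recommended values:

    [cephfs]
        path = /
        vfs objects = ceph
        ceph:config_file = /etc/ceph/ceph.conf
        ceph:user_id = samba
        read only = no
        # the libcephfs-backed path is not a kernel filesystem, so kernel locking is disabled
        kernel share modes = no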
Hi Laura,
We have used pool compression in the past and found it to work well.
We had it on a 4/2 EC pool and found data ended up near 1:1 pool:raw.
We were storing backup data in this CephFS pool; however, we changed
the backup product, and as the data is now encrypted at rest by the
application the b
Hi Frank,
Thank you for the incredibly detailed reply! Will respond inline.
On 8/17/22 7:06 AM, Frank Schilder wrote:
Hi Mark,
please find below a detailed report with data and observations from our
production
system. The ceph version is mimic-latest and some ways of configuring
compressi
Hi,
As I described in another mail (*1), my development Ceph cluster was
corrupted when using a problematic binary.
When I upgraded to v16.2.7 + some patches (*2) + PR#45963 patch,
unfound pgs and inconsistent pgs appeared. In the end, I deleted this cluster.
pacific: bluestore: set upper and lowe
I've used RBD for OpenStack clouds from small to large scale since 2015.
Been through many upgrades and done many stupid things, and it's still rock
solid. It's the most reliable part of Ceph, I'd say.
On Fri, Aug 19, 2022 at 3:47 AM Abhishek Maloo wrote:
> Hey Folks,
> I have recently joined t
I agree with others who have described RBD as rock solid.
Lots of people use RBD, especially for virtualization. DigitalOcean’s and
Vultr's block services are Ceph, for example, as are lots of OpenStack Cinder
deployments. Not an EBS replacement as such because AWS isn’t being used in
the first pl