I believe there is some problem with the systemd unit, as the OSD starts
successfully when run manually with the ceph-osd command.
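In case it helps narrow it down, comparing what the unit actually runs with
the manual invocation that works might show the difference. A rough sketch
(osd id 0 is just a placeholder):

    # show the ExecStart line and environment the unit uses
    systemctl cat ceph-osd@0
    # see why systemd thinks the last start attempt failed
    journalctl -xeu ceph-osd@0
    # the manual invocation that works, for comparison
    /usr/bin/ceph-osd -f --cluster ceph --id 0 --setuser ceph --setgroup ceph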
On Thu, Apr 8, 2021, 10:32 AM Enrico Kern wrote:
> I agree. But why does the process start when run manually, without
> systemd? That obviously has nothing to do with uid/gid 167
I agree. But why does the process start when run manually, without
systemd? That obviously has nothing to do with uid/gid 167. It is also
not really a fix to make all users change uid/gids...
On Wed, Apr 7, 2021 at 7:39 PM Wladimir Mutel wrote:
> Could there be a smoother migration? On my Ubuntu I have the
Hello,
Pardon if this has been asked, but I'm just getting started with Rados
Gateway. I looked around for some hints about performance tuning and found
a reference to setting rgw_max_chunk_size = 4M. I suspect the material was
written during Jewel or earlier, so I'm wondering about the best practice.
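For reference, this is roughly how I was planning to check and change it;
I'm not sure 'client.rgw' is the right config section name for every kind
of deployment:

    # value currently applied to the gateway daemons
    ceph config get client.rgw rgw_max_chunk_size
    # the tuning the old material suggested (4M = 4194304 bytes)
    ceph config set client.rgw rgw_max_chunk_size 4194304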
Hi Seba,
The RGW HA mode is still buggy, and is getting reworked. I'm hoping
we'll have it sorted by the .2 release or so. In the meantime, you
can configure haproxy and/or keepalived yourself or use whatever other
load balancer you'd like...
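If you go the haproxy route, the minimal shape is something like the config
below; addresses, ports, and TLS handling are placeholders you'd adjust for
your own radosgw instances:

    frontend rgw_frontend
        bind *:80
        mode http
        default_backend rgw_backend

    backend rgw_backend
        mode http
        balance roundrobin
        # one 'server' line per radosgw instance
        server rgw1 192.168.0.11:8080 check
        server rgw2 192.168.0.12:8080 check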
s
On Sat, Apr 3, 2021 at 9:39 PM Seba chanel wrote:
You would normally tell cephadm to deploy another mgr with 'ceph orch
apply mgr 2'. In this case, the default placement policy for mgrs is
already either 2 or 3, though--the problem is that you only have 1
host in your cluster, and cephadm currently doesn't handle placing
multiple mgrs on a single host.
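Once a second host has been added to the cluster, something along these
lines should do it (the hostnames below are placeholders):

    # count-based placement; cephadm picks which hosts get a mgr
    ceph orch apply mgr 2
    # or pin the mgr daemons to specific hosts
    ceph orch apply mgr --placement "host1 host2"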
Could there be a smoother migration? On my Ubuntu I have the same behavior and
my ceph uid/gid are also 64045.
I started with Luminous in 2018, when it was not containerized, and I still
keep updating it with apt.
Since when have we had this hardcoded value of 167?
Andrew Walker-Brown wrote:
Now that the hybrid allocator appears to be enabled by default in
Octopus, is it safe to change bluestore_min_alloc_size_hdd to 4k from
64k on Octopus 15.2.10 clusters, and then redeploy every OSD to switch
to the smaller allocation size, without massive performance impact for
RBD? We're seeing a l
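For context, the plan would be roughly the following, on the assumption
that the option only takes effect when an OSD's bluestore is created:

    # applies to OSDs created from now on; existing OSDs keep 64k
    ceph config set osd bluestore_min_alloc_size_hdd 4096
    # then remove and recreate each OSD in turn, e.g. with cephadm:
    ceph orch osd rm <osd-id> --replace
    # and redeploy it so the new 4k allocation size is used at mkfs time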
Can you share the output of 'ceph log last cephadm'? I'm wondering if
you are hitting https://tracker.ceph.com/issues/50114
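If it's easier, the relevant bits can be pulled with something like:

    # recent cephadm module events, including the upgrade activity
    ceph log last cephadm
    # current state of the orchestrated upgrade
    ceph orch upgrade status
    # plus any health warnings it has raised
    ceph health detail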
Thanks!
s
On Mon, Apr 5, 2021 at 4:00 AM Peter Childs wrote:
>
> I am attempting to upgrade a Ceph cluster that was deployed with
> Octopus 15.2.8 and upgraded to