Hello list,
we have found that the active mgr process in our 3-node Ceph cluster
uses a lot of memory. After startup, the memory usage increases
constantly; after 6 days the process uses ~67 GB:
~# ps -p 7371 -o rss,%mem,cmd
RSS %MEM CMD
71053880 26.9 /usr/bin/ceph-mgr -n mgr.hostname.nvw
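(For reference, the growth can be tracked over time with a simple loop around the same ps call, reusing the PID from the output above; the log path is arbitrary:)
~# while true; do echo "$(date -Is) $(ps -p 7371 -o rss=)" >> /var/tmp/mgr-rss.log; sleep 3600; done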
Hello list,
I forgot to mention the Ceph version:
17.2.6 (d7ff0d10654d2280e08f1ab989c7cdf3064446a5)
quincy (stable)
Greetings
Tobias
Hi Patrick,
On 5/22/23 22:00, Patrick Donnelly wrote:
Hi Conrad,
On Wed, May 17, 2023 at 2:41 PM Conrad Hoffmann wrote:
On 5/17/23 18:07, Stefan Kooman wrote:
On 5/17/23 17:29, Conrad Hoffmann wrote:
Hi all,
I'm having difficulties removing a CephFS volume that I set up for
testing. I've
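(For reference, a minimal sketch of the removal commands themselves, assuming a hypothetical volume name "testfs"; removing a volume deletes its pools, so pool deletion has to be allowed first:)
~# ceph config set mon mon_allow_pool_delete true
~# ceph fs volume rm testfs --yes-i-really-mean-it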
Hi,
there was a change introduced [1] for cephadm to use dashes for
container names instead of dots. That still seems to be an issue
somehow; in your case, cephadm is complaining about the missing
directory:
/var/lib/ceph/d0a3b6e0-d2c3-11ed-be05-a7a3a1d7a87e/grafana-fl31ca104ja0201/unit.run
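(One way to compare what cephadm expects with what is actually on disk, reusing the fsid from the path above:)
~# ceph orch ps --daemon-type grafana
~# ls /var/lib/ceph/d0a3b6e0-d2c3-11ed-be05-a7a3a1d7a87e/ | grep grafana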
Hi,
can the cephfs "max_file_size" setting be changed at any point in the
lifetime of a cephfs?
Or is it critical for existing data if it is changed after some time? Is
there anything to consider when changing it, let's say, from 1 TB (the
default) to 4 TB?
We are running the latest Nautilus release
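(For reference, a minimal sketch of checking and changing the setting, assuming a hypothetical file system name "cephfs"; 4398046511104 bytes is 4 TiB:)
~# ceph fs get cephfs | grep max_file_size
~# ceph fs set cephfs max_file_size 4398046511104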
Hi,
there was a thread [1] just a few weeks ago. Which mgr modules are
enabled in your case? Also the mgr caps seem to be relevant here.
[1]
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/message/BKP6EVZZHJMYG54ZW64YABYV6RLPZNQO/
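(For completeness, the enabled modules and the mgr caps can be checked like this, using the mgr name from the ps output above:)
~# ceph mgr module ls
~# ceph auth get mgr.hostname.nvw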
Quoting Tobias Hachmer:
Hello list,
we have
Good morning Eugen!
Thank you, this allowed me to successfully migrate my OSDs to ports
above 6830. This in turn prevents the conflict with slurmd.
Cordially,
Renata.
On 5/18/23 18:26, Eugen Block wrote:
Hi,
the config options you mention should work, but not in the ceph.conf.
You shoul
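(For the archive, a hedged sketch of the kind of change meant here, assuming the options are set through the central config database rather than in ceph.conf; the port values are illustrative, chosen to land above 6830, and the OSDs need a restart to pick them up:)
~# ceph config set osd ms_bind_port_min 6832
~# ceph config set osd ms_bind_port_max 7300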
Hi Eugen,
On 5/23/23 at 12:50, Eugen Block wrote:
there was a thread [1] just a few weeks ago. Which mgr modules are
enabled in your case? Also the mgr caps seem to be relevant here.
[1]
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/message/BKP6EVZZHJMYG54ZW64YABYV6RLPZNQO/
than
I found that I could get much more aggressive with the
osd_mclock_cost_per_byte_usec_hdd values if my
osd_mclock_max_capacity_iops_hdd numbers were set to sane values. If I used
the (very wrong) default calculated values, I would start getting slow ops.
Setting all HDDs in my environment to 3
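(The actual value is cut off above; purely as an illustration of the mechanism, the measured and the pinned capacity can be handled like this, where 300 IOPS is a made-up figure:)
~# ceph config show osd.0 osd_mclock_max_capacity_iops_hdd
~# ceph config set osd osd_mclock_max_capacity_iops_hdd 300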
Hi,
> On 23 May 2023, at 13:27, Dietmar Rieder wrote:
>
> can the cephfs "max_file_size" setting be changed at any point in the
> lifetime of a cephfs?
> Or is it critical for existing data if it is changed after some time? Is
> there anything to consider when changing, let's say, from 1TB (de
On Tue, May 23, 2023 at 3:28 AM Dietmar Rieder
wrote:
>
> Hi,
>
> can the cephfs "max_file_size" setting be changed at any point in the
> lifetime of a cephfs?
> Or is it critical for existing data if it is changed after some time? Is
> there anything to consider when changing, let's say, from 1TB
There are tags for placement, so any user that has the tag is allowed to access
the placement.
But I can't find how to do the same for a storage class.
Does anyone have any idea? Thanks.
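(For context, a rough sketch of the placement-tag mechanism mentioned above, to the best of my recollection of the radosgw-admin options; the tag name and user ID are made up, and I have not found an equivalent per-storage-class knob either:)
~# radosgw-admin zonegroup placement modify --rgw-zonegroup default --placement-id default-placement --tags restricted
~# radosgw-admin metadata get user:someuser > user.json
(add "restricted" to the user's "placement_tags" list in user.json, then:)
~# radosgw-admin metadata put user:someuser < user.json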
Hi Venky,
thank you for your help. We managed to shut down mds.1:
We set "ceph fs set max_mds 1" and waited for about 30 minutes. In the first
couple minutes, strays were migrated from mds.1 to mds.0. After this, the stray
export hung. The mds.1 remained in the state_stopping. After about 30 min
In addition, I would like to mention that the "strays_created" counter
also increased after this action, but num_strays is lower
now. If desired, we can provide debug logs from the MDS from the time when it
was in the stopping state and we did a systemctl restart of mds1.
The only active md
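(For reference, the counters mentioned can be read from the MDS admin socket on the respective host; the daemon name is a placeholder:)
~# ceph daemon mds.<name> perf dump mds_cache | grep -E 'num_strays|strays_created'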
On 23.05.23 08:42, huxia...@horebdata.cn wrote:
Indeed, the question is about server-side encryption with keys managed by Ceph on
a per-user basis.
What kind of security do you want to achieve with encryption keys stored
on the server side?
Regards
--
Robert Sander
Heinlein Support GmbH
Linux:
Hello Igor,
just reporting that since the last restart (after reverting the changed values
to their defaults) the performance hasn't decreased (and it's been over
two weeks now). So either it helped after all, or the drop is caused
by something else I have yet to figure out... we've automated the test
Hi Nikola,
Just to be clear, these were the settings that you changed back to the
defaults?
Non-default settings are:
"bluestore_cache_size_hdd": {
"default": "1073741824",
"mon": "4294967296",
"final": "4294967296"
},
"bluestore_ca
Dear All,
After an unsuccessful upgrade to Pacific, the MDS daemons went offline and could not get
back online. I checked the MDS log and found the output below. See the cluster info below as
well. I would appreciate it if anyone could point me in the right direction. Thanks.
MDS log:
2023-05-24T06:21:36.831+1000 7efe56e7d700 1 m
On Tue, May 23, 2023 at 1:55 PM Justin Li wrote:
>
> Dear All,
>
> After a unsuccessful upgrade to pacific, MDS were offline and could not get
> back on. Checked the MDS log and found below. See cluster info from below as
> well. Appreciate it if anyone can point me to the right direction. Thank
Thanks for replying, Greg. I'll give you the detailed sequence of steps I did for the
upgrade below.
Step 1: upgrade ceph-mgr and the monitors --- reboot. Then mgr and mon are all up and
running.
Step 2: upgrade one OSD node --- reboot, and the OSDs are all up.
Step 3: upgrade a second OSD node named OSD-node2. I did
Hello Justin,
On Tue, May 23, 2023 at 4:55 PM Justin Li wrote:
>
> Dear All,
>
> After a unsuccessful upgrade to pacific, MDS were offline and could not get
> back on. Checked the MDS log and found below. See cluster info from below as
> well. Appreciate it if anyone can point me to the right d
Thanks Patrick. We're making progress! After issuing the cmd (ceph config) you
gave me below, the cluster health shows HEALTH_WARN and the MDS is back up. However,
CephFS can't be mounted and shows the error below. The Ceph mgr portal also shows a 500
internal error when I try to browse the CephFS folder. I'll be u
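(A quick way to cross-check the MDS and file system state while debugging this:)
~# ceph fs status
~# ceph health detail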
Sorry Patrick, the last email was rejected because of the attachment size. I attached a link
for you to download the log. Thanks.
https://drive.google.com/drive/folders/1bV_X7vyma_-gTfLrPnEV27QzsdmgyK4g?usp=sharing
Justin Li
Senior Technical Officer
School of Information Technology
Faculty of Science, Enginee
Hi Patrick,
Sorry to keep bothering you, but I found that the MDS service kept crashing even
though the cluster shows the MDS as up. I attached another log of the MDS server (eowyn) below.
I look forward to hearing more insights. Thanks a lot.
https://drive.google.com/file/d/1nD_Ks7fNGQp0GE5Q_x8M57HldYurPhuN/view
Hey all,
I'm facing a "minor" problem.
I do not always get results when going to the dashboard, under
Block -> Images, in the Images or Namespaces tab. The little refresh button
will keep spinning, and sometimes after several minutes it will finally
show something. That is odd, because from the sh
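(To compare against the dashboard, listing the images directly from the CLI and timing it might help narrow things down; the pool name is a placeholder:)
~# time rbd ls --long <pool>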
On 5/23/23 15:53, Konstantin Shalygin wrote:
Hi,
On 23 May 2023, at 13:27, Dietmar Rieder
wrote:
can the cephfs "max_file_size" setting be changed at any point in the
lifetime of a cephfs?
Or is it critical for existing data if it is changed after some time?
Is there anything to consider w
On 5/23/23 15:58, Gregory Farnum wrote:
On Tue, May 23, 2023 at 3:28 AM Dietmar Rieder
wrote:
Hi,
can the cephfs "max_file_size" setting be changed at any point in the
lifetime of a cephfs?
Or is it critical for existing data if it is changed after some time? Is
there anything to consider whe