Hi Robert,
On Wed, Jan 18, 2023 at 2:43 PM Robert Sander
wrote:
>
> Hi,
>
> I have a healthy (test) cluster running 17.2.5:
>
> root@cephtest20:~# ceph status
>   cluster:
>     id:     ba37db20-2b13-11eb-b8a9-871ba11409f6
>     health: HEALTH_OK
>
>   services:
>     mon: 3 daemons,
Hi Ilya,
thanks for the info, it did help. I agree, it's the orchestration layer's
responsibility to handle things right. I already have a case open with support,
and it looks like there is indeed a bug on that side. I was mainly after a way
that ceph librbd clients could offer a safety net in ca
Does anyone know what could have happened?
On Mon, Jan 16, 2023 at 13:44, wrote:
> Good morning everyone.
>
> On Thursday night we had an incident where someone accidentally renamed
> the .data pool of a File System, making it instantly inaccessible; when
> renaming it again
Hi Frank,
one thing that might be relevant here: if you disable transparent lock
transitions, you cannot create snapshots of images that are in use that way.
This may or may not be relevant in your case. I'm just mentioning it
because I myself was surprised by that.
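To illustrate (pool/image/snapshot names below are made up, and the exact
error may differ): one way to end up in that state is an exclusive mapping
with the kernel client, which disables the transparent lock hand-off:

    rbd map --exclusive testpool/img1

    # while this mapping holds the exclusive lock, a snapshot request
    # from another client cannot take over the lock, so it is refused
    rbd snap create testpool/img1@before-maintenance

The same applies to librbd clients that take the lock explicitly with
rbd_lock_acquire() in exclusive mode.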
Best regards,
Andreas
Hi Andreas,
thanks for that piece of information.
I understand that transient lock migration is important under "normal"
operational conditions. The use case I have in mind is the process of
live-migration, when one might want to do a clean hand-over of a lock between
two librbd clients. Speci
Hi Thomas,
On Tue, Jan 17, 2023 at 5:34 PM Thomas Widhalm
wrote:
>
> Another new thing that just happened:
>
> One of the MDS just crashed out of nowhere.
>
> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic
Hi Venky,
Thanks.
I just uploaded my logs to the tracker.
I'll try what you suggested and will let you know how it went.
Cheers,
Thomas
On 19.01.23 14:01, Venky Shankar wrote:
Hi Thomas,
On Tue, Jan 17, 2023 at 5:34 PM Thomas Widhalm
wrote:
Another new thing that just happened:
One of t
Hi,
Unfortunately the workaround didn't work out:
[ceph: root@ceph05 /]# ceph config show mds.mds01.ceph06.hsuhqd | grep mds_wipe
mds_wipe_sessions  true  mon
[ceph: root@ceph05 /]# ceph config show mds.mds01.ceph04.cvdhsx | grep mds_wipe
mds_wipe_sessions  true
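For reference, the generic way such an option is applied, checked and later
removed again is the usual config set/show/rm pattern (shown here with the
daemon names from above; this is not necessarily the exact workaround that
was suggested):

    ceph config set mds mds_wipe_sessions true
    ceph config show mds.mds01.ceph06.hsuhqd | grep mds_wipe
    ceph config rm mds mds_wipe_sessions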
I'm running Quincy and my journal fills with messages that I consider
"debug" level such as:
* ceph-mgr[1615]: [volumes INFO mgr_util] scanning for idle connections..
* ceph-mon[1617]: pgmap v1176995: 145 pgs: 145 active+clean; ...
* ceph-mgr[1615]: [dashboard INFO request] ...
* ceph-mgr
Hi,
did you read this thread [1] (or other questions like [2])? Basically,
it's something like this:
ceph config set client.rgw.<id> rgw_keystone_url https://control.fqdn:5000
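The keystone integration usually needs a few more options set the same way;
a sketch with placeholder values (the <id> part is your RGW instance name,
and the user/project/roles are site specific):

    ceph config set client.rgw.<id> rgw_keystone_api_version 3
    ceph config set client.rgw.<id> rgw_keystone_admin_user swift
    ceph config set client.rgw.<id> rgw_keystone_admin_password <secret>
    ceph config set client.rgw.<id> rgw_keystone_admin_domain Default
    ceph config set client.rgw.<id> rgw_keystone_admin_project service
    ceph config set client.rgw.<id> rgw_keystone_accepted_roles "member,admin"
    ceph config set client.rgw.<id> rgw_s3_auth_use_keystone true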
Regards,
Eugen
[1]
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/CRYFR777VKQLYLXPSNJVBOKZRGXLXFV
Hello everyone,
I tried both of the values and increased the max to 20 GB, but I didn't see
any difference in the mirror speed. Can you please guide me or help me tune
these values, or suggest any other way?
Values I tried:
rbd_mirror_memory_target
rbd_mirror_memory_cache
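For anyone reading along, the way such values are normally applied and
checked is something like this (the daemon id and the ~20 GB value are only
placeholders; the memory target mainly sizes the daemon's cache, so it does
not necessarily translate into higher replication speed):

    ceph config set client.rbd-mirror.<id> rbd_mirror_memory_target 21474836480
    ceph config get client.rbd-mirror.<id> rbd_mirror_memory_target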
Dear all,
We have started to use CephFS more intensively for some WLCG-related workloads.
We have 3 active MDS instances spread across 3 servers, with
mds_cache_memory_limit=12G; most of the other configs are defaults.
One of them crashed last night, leaving the log below.
Do you have any hint on wh
Hi.
I'm new to Ceph and have been toying around in a virtual environment (for now),
trying to understand how to manage it. I made 3 VMs in Proxmox and provisioned
a bunch of virtual drives to each, then bootstrapped following the official
Quincy-branch documentation.
These are the drives:
> /dev/sdb 128.00 G
Hi Seccentral,
How did you run that `ceph-volume raw prepare` command exactly? If you ran
it manually from within a separate container, the keyring issue you faced
is expected.
In any case, what you are trying to achieve is not supported by ceph-volume
at the moment, but from what I've seen, it do
Hi Thomas,
On Thu, Jan 19, 2023 at 7:15 PM Thomas Widhalm
wrote:
>
> Hi,
>
> Unfortunately the workaround didn't work out:
>
> [ceph: root@ceph05 /]# ceph config show mds.mds01.ceph06.hsuhqd | grep mds_wipe
> mds_wipe_sessions  true  mon
> [ceph: root@ceph05 /