Janne,
Thanks for your reply. To reduce the cost of recovering OSDs while a WAL/DB
device is down, maybe I have no choice but to add more WAL/DB devices.
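For the archives, a rough sketch of the rebuild path when a shared WAL/DB device dies (device paths and OSD IDs below are made-up examples, not from this thread):
```
# Which OSDs were using the failed DB/WAL device? Check the "db device" field.
ceph-volume lvm list

# Take the affected OSDs out and purge them (example IDs)
ceph osd out 3 7
ceph osd purge 3 --yes-i-really-mean-it
ceph osd purge 7 --yes-i-really-mean-it

# Wipe the old data devices and recreate the OSDs against a replacement DB device
ceph-volume lvm zap --destroy /dev/sdb /dev/sdc
ceph-volume lvm batch --bluestore /dev/sdb /dev/sdc --db-devices /dev/nvme1n1
```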
On 2023/3/15 15:04, Janne Johansson wrote:
hi, everyone,
I have a question about repairing the broken WAL/DB device.
I have a cluster with 8
Dear Ceph Team,
I hope this email finds you well. I am writing to express my keen interest
in participating in the Google Summer of Code (GSoC) program 2023 with your
team.
I am a 3rd year B.tech student in Computer Science Engineering, with a
strong passion for [specific area of interest related
Hi all
I want to study the effect of bluestore rocksdb compression on ceph and whether
it is necessary to optimize it. But currently, bluestore rocksdb compression is
disabled by default in ceph.
I simply replaced the rocksdb compression algorithm, and then performed a 4KB
rand read fio test,
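To make the setup a bit more concrete, a hedged sketch of the kind of change and benchmark described above (the compression value, pool and image names, and fio parameters are my assumptions):
```
# Select a RocksDB compression algorithm for BlueStore OSDs.
# Caveat: bluestore_rocksdb_options replaces the whole default option string,
# so the remaining defaults need to be carried over (abbreviated here), and
# the OSDs must be restarted for the change to take effect.
ceph config set osd bluestore_rocksdb_options "compression=kLZ4Compression,<rest of defaults>"

# 4 KiB random-read benchmark against a test RBD image
fio --name=randread4k --ioengine=rbd --clientname=admin --pool=testpool \
    --rbdname=testimg --rw=randread --bs=4k --iodepth=32 --runtime=300 --time_based
```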
With CentOS/Rocky 7-8 I’ve observed unexpected usage of swap when there is
plenty of physmem available.
Swap IMHO is a relic of a time when RAM capacities were lower and much more
expensive.
In years beginning with a 2, and with Ceph explicitly, I assert that swap
should never be enabled duri
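As a generic aside (not prescribed in this thread), keeping swap out of the picture on an OSD host usually comes down to:
```
# Turn off active swap immediately
swapoff -a

# Keep it off across reboots by dropping swap entries from fstab
sed -i.bak '/\sswap\s/d' /etc/fstab

# Or, if swap must remain configured, discourage the kernel from using it
sysctl -w vm.swappiness=1
echo 'vm.swappiness = 1' > /etc/sysctl.d/99-swappiness.conf
```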
Thank you both for your help!
I got 17.2.5 running now.
I still had one mgr on 16.2.11 with which the orch module was running, and I was
able to set the `cephadm` backend on the web interface.
Then I directly upgraded to 17.2.5; during the upgrade it seemed to have paused
again but after disablin
Ah, OK, it was not clear to me that skipping a minor version when doing a major
upgrade was supported.
The unit file tells me:
```
# cat /var/lib/ceph/6d0ecf22-9155-4684-971a-2f6cde8628c8/mgr.pamir.ajvbug/unit.run
set -e
/usr/bin/install -d -m0770 -o 167 -g 167 /var/run/ceph/6d0ecf22-9155-4684-971a-2f6cde8628c8
# mgr.pamir.ajvbug
! /usr/bin/podman rm -f ceph-6d0ecf22-9155-4684-971a-2f6cde8628c8-
```
I ended up in the same situation while playing around with a test cluster. The
SUSE team has an article [1] for this case; the following helped me resolve
this issue. I had three different osd specs in place for the same three nodes:
osd 33w nautilus2;nautilus
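For reference, inspecting and pruning overlapping specs looks roughly like this (the service name in the last command is hypothetical):
```
# Show all OSD service specs currently applied
ceph orch ls osd

# Dump them as YAML to see which specs target the same hosts
ceph orch ls osd --export

# Remove a redundant spec by its service name; as far as I understand,
# the OSD daemons it created are not destroyed, they just become unmanaged.
ceph orch rm osd.duplicate-spec-name
```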
Hi all,
I'm using RGW multisite with Ceph 17.2.5 deployed with Rook.
A number of bucket.sync-status mdlogs with names of buckets deleted during
maintenance were found.
(test env)
bash-4.4$ rados -p master.rgw.log ls | grep bucket.sync-status | grep test1
bucket.sync-status.a788ebed-10a9-48da-
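If the aim is just to clear out sync-status objects for buckets that are really gone, a cautious sketch (the removal step is my assumption; double-check every object name first):
```
# Confirm the bucket no longer exists
radosgw-admin bucket stats --bucket=test1

# List the stale sync-status objects for that bucket
rados -p master.rgw.log ls | grep bucket.sync-status | grep test1

# Remove them one by one (destructive!)
rados -p master.rgw.log rm bucket.sync-status.<full-object-name>
```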
Hello,
We have a 6-node Ceph cluster; all of them have OSDs running, and 3 of them
(ceph-1 to ceph-3) also have ceph-mgr and ceph-mon. Here is the detailed
configuration of each node (swap on ceph-1 to ceph-3 has been disabled after
the alarm):
# ceph-1 free -h
total
ceph pacific 16.2.11 (cephadm managed)
I have configured some NFS exports of CephFS from the Ceph GUI. We can mount
the filesystems and view file/directory listings, but cannot read any file data.
The permissions on the shares are RW. We mount from the client using
"vers=4.1".
Looking at de
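For anyone hitting the same "listing works, reads fail" symptom, a generic first pass might be (cluster id, export path and host names below are placeholders):
```
# Inspect the exports the dashboard created
ceph nfs export ls mynfs --detailed

# Re-test from a client with explicit options and try a plain read
mount -t nfs -o vers=4.1,proto=tcp nfs-host:/cephfs-export /mnt/test
dd if=/mnt/test/somefile of=/dev/null bs=1M count=1
```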
Hi everyone,
To help with costs for Cephalocon Amsterdam 2023, we wanted to see if anyone
would like to volunteer to help with photography for the event. A group of
people would be ideal so that we have good coverage in the expo hall and
sessions.
If you're interested, please reply to me direct
Hello ceph-users,
Unhappy with the capabilities regarding bucket access policies when using the
Keystone authentication module,
I posted to this ML a while back -
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/message/S2TV7GVFJTWPYA6NVRXDL2JXYUIQGMIN/
In general I'd still like to
Hi,
I could not confirm this in a virtual lab cluster, also on 17.2.5:
host1:~ # ceph osd pool create asdf
pool 'asdf' created
host1:~ # ceph-conf -D | grep 'osd_pool_default_pg'
osd_pool_default_pg_autoscale_mode = on
osd_pool_default_pg_num = 32
osd_pool_default_pgp_num = 0
So it looks quite
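To make that comparison concrete, one could also check what the new pool actually got and what the autoscaler thinks of it (my addition, not from the original mail):
```
# pg_num actually assigned to the freshly created pool
ceph osd pool get asdf pg_num

# Autoscaler's view: current vs. target PG counts per pool
ceph osd pool autoscale-status
```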
Hi Alex,
thanks a lot for your reply. This sounds interesting. I'm using a prebuilt
version deployed via cephadm, it's using some containers pulled from quay.io.
Right now, I'm too scared to build this myself for the production system...
Thinking about setting up a VM which mounts the cephfs via
Hey Patrick,
I had a somewhat similar issue but with FSAL_RGW.
On the NFS-side, I noticed missing files. There were no errors in the
logfiles whatsoever.
Solution was to rebuild nfs-ganesha with the most recent libceph-dev at
that time.
Are you using pre-built ganesha binaries or did you com
Hi,
today I saw a strange situation where files which were copied to a cephfs via
Ganesha NFS (deployed via cephadm) disappeared from the NFS directory and then
did not show up anymore until I restarted the ganesha instance. This could be
observed on different NFS client hosts. While the files
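In case it helps others with the same symptom, the restart that made the files show up again can be driven through the orchestrator (the service name below is a placeholder):
```
# Find the ganesha service deployed by cephadm
ceph orch ls nfs

# Restart all of its daemons
ceph orch restart nfs.mynfs
```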
Aaaah, many many thanks for the information, Rich! It helped a lot!
Indeed, the rados operations documentation lacks explanations for a bunch of
advanced commands.
I've finally found the exact and explicit command explanation here, for
the record: https://docs.ceph.com/en/latest/radosgw/layout/#metadata
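For completeness, the metadata described on that layout page is usually inspected through radosgw-admin rather than raw rados commands (the bucket name is just an example):
```
# List metadata sections and the entries in one of them
radosgw-admin metadata list
radosgw-admin metadata list bucket

# Dump the metadata object for a specific bucket
radosgw-admin metadata get bucket:mybucket
```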
Hi Ashu,
are you talking about the kernel client? I can't find "stripe size" anywhere in
its mount documentation. Could you possibly post exactly what you did? Mount
fstab line, config setting?
Thanks!
=
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
__
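Not an answer from this thread, but for context: with the kernel client the stripe size is not a mount option; CephFS striping is set per file or directory through layout xattrs, roughly like this (paths and values are made-up examples):
```
# Show the layout of a directory (only present if one was explicitly set)
getfattr -n ceph.dir.layout /mnt/cephfs/somedir

# Set stripe unit / count for new files created under that directory
setfattr -n ceph.dir.layout.stripe_unit -v 1048576 /mnt/cephfs/somedir
setfattr -n ceph.dir.layout.stripe_count -v 4 /mnt/cephfs/somedir
```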
Sorry, my replies from the web interface didn't go through yet.
Thank you both! With your help I was able to get it up and running on 17.2.5.
Yours,
bbk
On Tue, 2023-03-14 at 09:44 -0400, Adam King wrote:
> That's very odd, I haven't seen this before. What container image is the
> upgraded mgr
> hi, everyone,
> I have a question about repairing the broken WAL/DB device.
>
> I have a cluster with 8 OSDs, and 4 WAL/DB devices (1 OSD per WAL/DB
> device), and how can I repair the OSDs quickly if
>
> one WAL/DB device breaks down, without rebuilding them? Thanks.
I think this is one of