After going through the kernel code, the 'rasize' option no longer seems
to make much sense after netfs and
'ceph_netfs_expand_readahead()' were introduced to the kernel ceph client. The
client will try to expand the readahead window twice, and the second
time it will always align the len down and
Murilo,
The latency of an HDD is about 10 ms+, and the IO stack in Ceph may spend
~3 ms+,
so the test result is still in doubt. I guess the rbd test used the RAM
cache.
You can paste more of the fio output here.
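For what it's worth, a single-job, queue-depth-1 run is the easiest thing to compare against raw HDD latency. A sketch (pool and image names are placeholders, and it assumes fio was built with the rbd engine):

fio --name=rbd-write-lat --ioengine=rbd --clientname=admin --pool=rbd \
    --rbdname=testimg --rw=randwrite --bs=4k --iodepth=1 --numjobs=1 \
    --direct=1 --time_based --runtime=60

The clat/lat percentiles from such a run would tell us more than the averages.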
On 2023/3/17 07:16, Murilo Morais wrote:
Good evening everyone!
Guys, what to expect
Ashu,
BTW, have you tried setting the 'rasize' option to a small size instead of 0?
Won't this work?
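For example, something like this (a sketch; the monitor address, credentials and the 4 MiB value are placeholders):

mount -t ceph 192.168.0.1:6789:/ /mnt/cephfs \
    -o name=admin,secretfile=/etc/ceph/admin.secret,rasize=4194304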
Thanks
On 15/03/2023 02:23, Ashu Pachauri wrote:
Got the answer to my own question; posting here in case someone else
encounters the same problem. The issue is that the default stripe size in a
cephfs
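In case it helps anyone hitting a similar problem: the CephFS layout (stripe unit, stripe count, object size) can be inspected per file and, for newly created files, adjusted per directory via virtual xattrs. A sketch with placeholder paths and sizes:

getfattr -n ceph.file.layout /mnt/cephfs/somefile   # show the current layout of an existing file
setfattr -n ceph.dir.layout.stripe_unit -v 1048576 /mnt/cephfs/somedir   # only affects files created afterwards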
Besides Jitsi, another option would be BigBlueButton (BBB). Does anyone know how
BBB compares with Jitsi?
huxia...@horebdata.cn
From: Mike Perez
Date: 2023-03-16 21:54
To: ceph-users
Subject: [ceph-users] Moving From BlueJeans to Jitsi for Ceph meetings
Hi everyone,
We have been using BlueJ
Hi,
tracker.ceph.com seems to be quite slow recently. Since my colleagues
feel the same,
this problem wouldn't be specific to me.
Could you tell me if there is a plan to fix this problem in the near future?
Thanks,
Satoru
Good evening everyone!
Guys, what latency should I expect for RBD images in a cluster with only HDDs (36
HDDs)?
Sometimes I see that the write latency is around 2-5 ms on some images, even
with very low IOPS and bandwidth, while the read latency is around 0.2-0.7
ms.
For a cluster with only HDDs, is this la
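One way to narrow this down is to watch per-image latency from the cluster side (a sketch; 'mypool' is a placeholder and this relies on the rbd_support mgr module being enabled):

rbd perf image iostat mypool   # per-image read/write IOPS, throughput and latency
rbd perf image iotop mypool    # top-like view of the busiest images

That at least shows whether a few hot images or the whole pool are affected.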
Hey,
If it is not a security bug, you should use the Ceph tracker:
https://tracker.ceph.com/
Cheers!
On Thu, Mar 16, 2023 at 2:58 AM Patrick Vranckx <
patrick.vran...@uclouvain.be> wrote:
> Hi,
>
> I suspect a bug in cephadm to configure ingress service for rgw. Our
> production server was upgraded from
>
> We have been using BlueJeans to meet and record some of our meetings
> that later get posted to our YouTube channel. Unfortunately, we have
> to figure out a new meeting platform due to Red Hat discontinuing
> BlueJeans by the end of this month.
>
> Google Meets is an option, but some users
Hi everyone,
We have been using BlueJeans to meet and record some of our meetings
that later get posted to our YouTube channel. Unfortunately, we have
to figure out a new meeting platform due to Red Hat discontinuing
BlueJeans by the end of this month.
Google Meets is an option, but some users in
Hi all,
I have a 9-node cluster running *Pacific 16.2.10*. OSDs live on 9 of the
nodes, with each one having 4 x 1.8T SSDs and 8 x 10.9T HDDs, for a total of
108 OSDs. We created three crush roots as below.
1. The HDDs (8x9=72) of all nodes form a large crush root, which is used as
a data pool, and o
I agree it should be in the release notes or documentation; it took me 3 days to
track it down, and I was searching for all kinds of combinations of "cephfs nfs"
and "ceph nfs permissions".
Perhaps just having this thread archived will make it easier for the next
person to find the answer, though.
You found the right keywords yourself (application metadata), and I'm
glad it worked for you. I only found this tracker issue [2], which
fixes the behavior when issuing a "fs new" command, and it contains
the same workaround (set the application metadata). Maybe this should
be part of the (u
YES!! That fixed it.
I issued the following commands to update the application_metadata on the
cephfs pools and now it's working. THANK YOU!
ceph osd pool application set cephfs_data cephfs data cephfs
ceph osd pool application set cephfs_metadata cephfs data cephfs
Now the application_metadat
It sounds a bit like this [1], doesn't it? Setting the application
metadata is just:
ceph osd pool application set cephfs
cephfs
[1] https://www.suse.com/support/kb/doc/?id=20812
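For reference, the generic form of that workaround, as I read the KB article (pool and filesystem names are placeholders), is:

ceph osd pool application set <metadata pool> cephfs metadata <fs name>
ceph osd pool application set <data pool> cephfs data <fs name>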
Quoting Wyll Ingersoll:
Yes, with this last upgrade (pacific) we migrated to the
orchestrated model
Yes, with this last upgrade (pacific) we migrated to the orchestrated model
where everything is in containers. Previously, we managed nfs-ganesha ourselves
and exported shares using FSAL VFS over /cephfs mounted on the NFS server.
With orchestrated, Ceph-managed NFS, ganesha runs in a container a
That would have been my next question, if it had worked before. So the
only difference is the nfs-ganesha deployment and different (newer?)
clients than before? Unfortunately, I don't have any ganesha instance
running in any of my (test) clusters. Maybe someone else can chime in.
Quoting
Nope, that didn't work. I updated the caps to add "allow r path=/" to the mds caps,
but it made no difference. I restarted the nfs container and unmounted/remounted
the share on the client.
The caps now look like:
key = xxx
caps mds = "allow rw path=/exports/nfs/foobar, allow r path=/"
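For anyone following along, the caps of an existing client can be changed with 'ceph auth caps'. A sketch (the client name and the mon/osd caps are placeholders; note that this command replaces all caps, so the existing mon/osd caps have to be repeated):

ceph auth caps client.nfs-foobar \
    mds 'allow rw path=/exports/nfs/foobar, allow r path=/' \
    mon 'allow r' \
    osd 'allow rw tag cephfs data=cephfs'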
Hi,
I tried to respond directly in the web UI of the mailing list, but my
message was queued for moderation. I just wanted to share a solution
that worked for me when a service spec is stuck in a pending state;
maybe this will help others in the same situation.
While playing around with a
Hi Christian,
Replies are inline.
On Wed, Mar 15, 2023 at 9:27 PM Christian Rohmann <
christian.rohm...@inovex.de> wrote:
> Hello ceph-users,
>
> unhappy with the capabilities regarding bucket access policies when
> using the Keystone authentication module,
> I posted to this ML a while back
Hi,
I suspect a bug in cephadm when configuring the ingress service for rgw. Our
production server was upgraded continuously from Luminous to
Pacific. When configuring the ingress service for rgw, the generated haproxy.cfg is
incomplete. The same yaml file applied on our test cluster does the job.
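For comparison, a minimal ingress spec of the kind cephadm expects looks roughly like the following (service names, IP and ports are placeholders, not our production values):

cat > ingress.yaml <<EOF
service_type: ingress
service_id: rgw.default
placement:
  count: 2
spec:
  backend_service: rgw.default
  virtual_ip: 192.168.1.100/24
  frontend_port: 8080
  monitor_port: 1967
EOF
ceph orch apply -i ingress.yaml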
Regards,
Janne,
Thanks for your advice. I'll give it a try. :)
On 2023/3/16 15:00, Janne Johansson wrote:
On Thu, 16 Mar 2023 at 06:42, Norman wrote:
Janne,
Thanks for your reply. To reduce the cost of recovering OSDs while a
WAL/DB device is down, maybe I have no
choice but to add more WAL/DB devices.
We
> On 16 Mar 2023 at 05:30, Arush Sharma wrote the
> following:
>
> Dear Ceph Team,
>
> I hope this email finds you well. I am writing to express my keen interest
> in participating in the Google Summer of Code (GSoC) program 2023 with your
> team.
>
> I am a 3rd year B.tech student
On 14.03.23 15:22, b...@nocloud.ch wrote:
Ah, OK, it was not clear to me that skipping minor versions when doing a major
upgrade was supported.
You can even skip one major version when doing an upgrade.
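If the cluster is managed by cephadm, that is just a matter of pointing the upgrade at the target release, e.g. (the version here is only an example):

ceph orch upgrade start --ceph-version 17.2.5
ceph orch upgrade status    # watch progress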
Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin
https
Maybe worth mentioning, because it caught me by surprise:
Ubuntu creates a swap file (/swap.img) if you do not specify a swap
partition (check /etc/fstab).
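To check for it and get rid of it (a sketch; double-check /etc/fstab before deleting anything):

swapon --show        # lists /swap.img if it is in use
swapoff /swap.img    # stop using it
# then remove the /swap.img line from /etc/fstab and delete the file
rm /swap.img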
Cheers
Boris
On Wed, 15 March 2023 at 22:11, Anthony D'Atri <
a...@dreamsnake.net> wrote:
>
> With CentOS/Rocky 7-8 I’ve observed unex
Hi,
we saw this on a Nautilus cluster when clients were updated, so we had
to modify the client caps to allow read access to the "/" directory.
There's an excerpt in the SUSE docs [1] for that:
If clients with path restriction are used, the MDS capabilities need
to include read access to
On Thu, 16 Mar 2023 at 06:42, Norman wrote:
> Janne,
>
> Thanks for your reply. To reduce the cost of recovering OSDs while a
> WAL/DB device is down, maybe I have no
> choice but to add more WAL/DB devices.
We do run one SSD or NVMe for several HDD OSDs and have not seen
this as a problem in i
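For reference, that kind of layout (several HDD OSDs sharing one SSD/NVMe for DB/WAL) can be created in one pass with ceph-volume; a sketch with placeholder device names:

ceph-volume lvm batch /dev/sdb /dev/sdc /dev/sdd /dev/sde --db-devices /dev/nvme0n1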