Hi Frank,
Interesting findings indeed.
Unfortunately I'm absolutely unfamiliar with this disk scheduler stuff
in Linux. In my experience I've never faced any issues with it and
never needed to tune anything at this level.
But - given that AFAIK you're the only one who has faced the issue and
Our application will be pretty much Write Once/Read Many, with rare
updates to any object. We'll have < 500TiB of total storage with < 1B
objects. While I'm leaning toward a single bucket to hold the data, I do
have some questions:
Is there an advantage to sharding the data being as it will be
I think you can do it like:
```
service_type: rgw
service_id: main
service_name: rgw.main
placement:
  label: rgwmain
spec:
  config:
    rgw_keystone_admin_user: swift
```
?
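If that looks right, the spec can be saved to a file and applied with the orchestrator (the file name here is just an example):
```
# apply or update the rgw.main service from the spec file
ceph orch apply -i rgw-main.yaml
```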
From: Thilo-Alexander Ginkel
Sent: Thursday, November 17, 2022 10:21 AM
To: Casey Bodley
Hello Casey,
On Thu, Nov 17, 2022 at 6:52 PM Casey Bodley wrote:
> it doesn't look like cephadm supports extra frontend options during
> deployment. but these are stored as part of the `rgw_frontends` config
> option, so you can use a command like 'ceph config set' after
> deployment to add request_timeout_ms
it doesn't look like cephadm supports extra frontend options during
deployment. but these are stored as part of the `rgw_frontends` config
option, so you can use a command like 'ceph config set' after
deployment to add request_timeout_ms
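for example (the config section name depends on your service id; `client.rgw.main` and the frontend values below are just assumptions):
```
# show the frontend options cephadm generated for the rgw service
ceph config get client.rgw.main rgw_frontends

# set them again with request_timeout_ms appended
ceph config set client.rgw.main rgw_frontends "beast port=8080 request_timeout_ms=65000"
```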
On Thu, Nov 17, 2022 at 11:18 AM Thilo-Alexander Ginkel wrote:
Hi Casey,
one followup question: We are using cephadm to deploy our Ceph cluster. How
would we configure the timeout setting using a service spec through cephadm?
Thanks,
Thilo
Hi
I have a PoC that uses VMware and will need to have a shared datastore across
multiple hosts. I've created 3 RBDs and shared them out via iSCSI. The hosts
and RBDs are in a host group together. The ESXi hosts can see the LUNs, but
only one is able to write to them at a time. I've disabled
The SUSE docs [1] cover that part (in general):
---snip---
- Tell the Ceph cluster not to mark OSDs as out:
  ceph osd set noout
- Stop daemons and nodes in the following order:
  - Storage clients
  - Gateways, for example NFS Ganesha or Object Gateway
  - Metadata Server
  - Ceph OSD
  - Ceph Manager
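As a rough sketch of how that can look on a cephadm deployment (assuming roles are mostly separated per node; `ceph.target` is the systemd target cephadm ships):
```
# keep the cluster from rebalancing while it is down
ceph osd set noout

# then, on each node in the order above (clients/gateways first, MGRs/MONs last):
systemctl stop ceph.target

# after powering everything back on again:
ceph osd unset noout
```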
Hi,
Is there a recommended way of shutting a cephadm cluster down completely?
I tried using cephadm to stop all the services but hit the following
message.
"Stopping entire osd.osd service is prohibited"
Thanks
On Thu, Nov 17, 2022 at 6:02 PM phandaal wrote:
> On 2022-11-17 12:58, Milind Changire wrote:
> > Christian,
> > Some obvious questions ...
> >
> >1. What Linux distribution have you deployed Ceph on ?
>
> Gentoo Linux, using python 3.10.
> Ceph is only used for CephFS, data pool using EC8+3
I've found that the bucket does not exist in omap.
Here:
`rados -p default.rgw.meta --namespace=users.uid listomapkeys
prod_user.buckets`
How could this happen?
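For completeness, the bucket metadata itself seems fine (the bucket name below is a placeholder), and I'm wondering whether re-linking it to the user would recreate the missing omap entry:
```
# bucket metadata and stats still resolve
radosgw-admin metadata get bucket:mybucket
radosgw-admin bucket stats --bucket=mybucket

# untested guess: re-linking might restore the user's bucket-list entry
radosgw-admin bucket link --bucket=mybucket --uid=prod_user
```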
Thanks.
On Thu, Nov 17, 2022 at 10:14 AM Jonas Nemeikšis
wrote:
> Hi Cephers.
>
> I'm facing a strange issue with the S3 bucket list.
On 2022-11-17 12:58, Milind Changire wrote:
> Christian,
> Some obvious questions ...
>
> 1. What Linux distribution have you deployed Ceph on ?

Gentoo Linux, using python 3.10.
Ceph is only used for CephFS, data pool using EC8+3 on spinners,
metadata using replication on SSDs.

> 2. The snap_schedule db has indeed been moved to an SQLite DB in rados
> in Quincy.
Hi Igor,
I might have a smoking gun. Could it be that ceph (or the kernel?) has issues
with certain disk schedulers? There was a recommendation on this list to use
bfq with bluestore. This was actually the one change other than the ceph
version during the upgrade: making bfq the default. Now, this might be
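For reference, the active scheduler can be inspected and switched per device via sysfs (sda is just an example device name):
```
# list the available schedulers; the active one is shown in brackets
cat /sys/block/sda/queue/scheduler

# switch back to another scheduler at runtime, e.g. mq-deadline
echo mq-deadline > /sys/block/sda/queue/scheduler
```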
Christian,
Some obvious questions ...
1. What Linux distribution have you deployed Ceph on ?
2. The snap_schedule db has indeed been moved to an SQLite DB in rados
in Quincy.
So, is there ample storage space in your metadata pool to move this DB
to ?
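You can check that with something like the following (pool names depend on your setup):
```
# per-pool usage and available space; look at the CephFS metadata pool
ceph df detail

# shows which pools back the filesystem
ceph fs status
```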
On Thu, Nov 17, 2022 at 2:53
Hi Patrick,
sorry for the mail flood. The reason I'm asking is that I always see these
pairs of warnings:
slow request 34.592600 seconds old, received at
2022-11-17T10:44:39.650761+0100: internal op exportdir:mds.3:15730122 currently
failed to wrlock, waiting
slow request 41.092127 seconds old
Hi Patrick,
thanks for your explanation. Is there a way to check which directory is
exported? For example, is the inode contained in the messages somewhere? A
readdir would usually happen on log-in and the number of slow exports seems
much higher than the number of people logging in (I would as
Hi all,
After upgrading from 16.2.10 to 17.2.5, the snap_schedule dashboard
module does not start anymore (everything else is just fine).
I had snapshots scheduled with this module in my cephfs, working perfectly on
16.2.10, but I couldn't find them anymore after the upgrade, due to the
module being un
Hi all,
I'm trying to install a new RGW node. After executing this command:
/usr/bin/radosgw -f --cluster ceph --name client.rgw.s3-001 --setuser ceph
--setgroup ceph --keyring=/etc/ceph/ceph.client.admin.keyring --conf
/etc/ceph/ceph.conf -m 10.0.111.13
I get:
2022-11-16T15:37:39.291+01
Hi Cephers.
I'm facing a strange issue with the S3 bucket list. It's Pacific 16.2.9.
In one cluster a bucket (I haven't found more yet) is not listed for its
owner, e.g. via `aws s3api list-buckets`. I can list this bucket with
`radosgw-admin bucket list`, but if I add a uid to that command I also
cannot view the bucket.
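Concretely, these are the listings that disagree (the endpoint URL below is a placeholder for our setup):
```
# owner's S3 view: the affected bucket is missing here
aws --endpoint-url https://rgw.example.com s3api list-buckets

# admin view: the bucket shows up in the global listing ...
radosgw-admin bucket list

# ... but not when restricted to the owning user
radosgw-admin bucket list --uid=prod_user
```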