Hello Frank,
On Tue, Jun 22, 2021 at 2:16 AM Frank Schilder wrote:
>
> Dear all,
>
> some time ago I reported that the kernel client resorts to a copy instead of a
> move when moving a file across quota domains. I was told that the fuse client
> does not have this problem. If enough space is avai
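In case anyone wants to reproduce the behaviour, a minimal sketch looks like this, assuming a kernel CephFS mount at /mnt/cephfs (directory names and quota sizes are only examples):

# two directories with separate quotas, i.e. two quota domains
mkdir /mnt/cephfs/dirA /mnt/cephfs/dirB
setfattr -n ceph.quota.max_bytes -v 10000000000 /mnt/cephfs/dirA
setfattr -n ceph.quota.max_bytes -v 10000000000 /mnt/cephfs/dirB
# move a file across the quota boundary; with the kernel client the
# rename() may return EXDEV, so mv falls back to copy + delete
dd if=/dev/zero of=/mnt/cephfs/dirA/testfile bs=1M count=1024
mv /mnt/cephfs/dirA/testfile /mnt/cephfs/dirB/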
On Mon, Jun 21, 2021 at 8:13 PM opengers wrote:
>
> Thanks for the answer. I'm still a bit confused by the explanation
> of "MDS_SLOW_REQUEST" in the documentation, as follows:
> --
> MDS_SLOW_REQUEST
>
> Message
> “N slow requests are blocked”
>
> Description
> One or more client req
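A minimal way to inspect those blocked requests while the warning is active (assuming access to the MDS admin socket; mds.<name> is a placeholder):

ceph health detail
# list the requests the MDS is currently blocked on
ceph daemon mds.<name> dump_ops_in_flight
# see which client sessions are holding them up
ceph daemon mds.<name> session ls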
Hello,
As a follow-up to the thread "RBD migration between 2 EC pools : very slow".
I'm running Octopus 15.2.13.
RBD migration seems really fragile.
I started a migration to change the data pool (from an EC 3+2 to an EC 8+2):
- rbd migration prepare
- rbd migration execute
=> 4% after 6h, and
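For reference, the full sequence is roughly the following (pool and image names are placeholders; progress and the final commit shown for completeness):

rbd migration prepare --data-pool ec_8_2_pool rbd_pool/image1
rbd migration execute rbd_pool/image1
# progress can be followed with:
rbd status rbd_pool/image1
# and only once execute has finished:
rbd migration commit rbd_pool/image1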
Dan,
Thank you for the suggestion. Changing osd_max_pg_per_osd_hard_ratio to
10 and also setting mon_max_pg_per_osd to 500 allowed me to resume IO (I
did have to restart the OSDs with stuck slow ops).
I'll have to do some reading into why our PG count appears so high, and
if it's safe to lea
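In case it helps with that reading, the current per-OSD PG count and the autoscaler's per-pool view (if that module is enabled) can be checked with:

ceph osd df tree                   # PGS column shows PGs per OSD
ceph osd pool autoscale-status     # pg_num targets per pool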
Or you can use radosgw_usage_exporter [1] and provide some graphs to end users
[1] https://github.com/blemmenes/radosgw_usage_exporter
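If only the raw numbers are needed rather than graphs, essentially the same usage data can also be pulled directly with radosgw-admin (uid is a placeholder; usage logging must be enabled for the first command to return anything):

radosgw-admin usage show --uid=<user> --show-log-entries=false
radosgw-admin bucket stats --uid=<user>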
k
Sent from my iPhone
> On 23 Jun 2021, at 11:59, Matthew Vernon wrote:
>
>
> I think you can't via S3; we collect these data and publish them out-of-band
>
> This looks like a bug; the topic should be created in the right tenant.
> Please submit a tracker issue for that.
>
Thank you for confirming.
Created here https://tracker.ceph.com/issues/51331
> Yes. Topics are owned by the tenant. Previously, they were owned by the
> user, but since the same topic
Hi all,
I'm a new Ceph user trying to install my first cluster.
I'm trying to install Pacific, but I end up with Octopus instead.
What's wrong here?
I've done:
# curl --silent --remote-name --location
https://github.com/ceph/ceph/raw/pacific/src/cephadm/cephadm
# chmod +x cephadm
# ./cephadm add-repo --re
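For comparison, the documented sequence for pinning the Pacific repo is roughly the one below; afterwards it is worth checking which repo file actually got written before bootstrapping (the apt path assumes a Debian/Ubuntu host, EL systems use /etc/yum.repos.d/ceph.repo instead):

# ./cephadm add-repo --release pacific
# ./cephadm install
# cat /etc/apt/sources.list.d/ceph.list
# cephadm version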
Hi,
Stuck activating could be an old known issue: if the cluster has many
(>100) PGs per OSD, the OSDs may temporarily need to hold more than the
max (300), and PGs therefore get stuck activating.
We always use this option as a workaround:
osd max pg per osd hard ratio = 10.0
I suggest giving thi
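A minimal way to apply that at runtime, assuming a release with the centralized `ceph config` store (otherwise put it in ceph.conf and restart the OSDs):

ceph config set osd osd_max_pg_per_osd_hard_ratio 10.0
# the effective hard limit is mon_max_pg_per_osd * this ratio, so also check:
ceph config get mon mon_max_pg_per_osd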
On Wed, Jun 23, 2021 at 3:36 PM Marc wrote:
>
> From what kernel / ceph version is krbd usage on an OSD node problematic?
>
> Currently I am running Nautilus 14.2.11 and el7 3.10 kernel without any
> issues.
>
> I can remember using a cephfs mount without any issues as well, until some
> specific
On 2021-06-23 14:51, Alexander E. Patrakov wrote:
On Tue, 22 Jun 2021 at 23:22, Gilles Mocellin wrote:
Hello Cephers,
On a capacity-oriented Ceph cluster (13 nodes, 130 × 8 TB HDD OSDs), I'm
migrating a 40 TB image from a 3+2 EC pool to an 8+2 one.
The use case is Veeam backup on XFS filesystems, mounted
Hello!
We are in the process of expanding our Ceph cluster (by both adding OSD
hosts and replacing smaller-sized HDDs on our existing hosts). So far we
have gone host by host, removing the old OSDs, swapping the physical
HDDs, and re-adding them. This process has gone smoothly, aside from one
i
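A rough sketch of that per-OSD replacement loop, for the archives (the OSD id and device path are placeholders; the orch command assumes a cephadm-managed cluster):

ceph osd out 12
# wait for data to drain off the OSD, then confirm:
ceph osd safe-to-destroy 12
ceph osd purge 12 --yes-i-really-mean-it
# after swapping the physical disk, re-add it, e.g.:
ceph orch daemon add osd <host>:/dev/sdX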
On Wed, Jun 23, 2021 at 2:21 PM Daniel Iwan wrote:
> Hi
>
> I'm using Ceph Pacific 16.2.1
>
> I'm creating a topic as a user which belongs to a non-default tenant.
> I'm using AWS CLI 2 with v3 authentication enabled
>
> aws --profile=ceph-myprofile --endpoint=$HOST_S3_API --region="" sns
> creat
From what kernel / ceph version is krbd usage on an OSD node problematic?
Currently I am running Nautilus 14.2.11 and el7 3.10 kernel without any issues.
I can remember using a cephfs mount without any issues as well, until some
specific luminous update surprised me. So maybe nice to know when t
On Tue, 22 Jun 2021 at 23:22, Gilles Mocellin wrote:
>
> Hello Cephers,
>
>
> On a capacity-oriented Ceph cluster (13 nodes, 130 × 8 TB HDD OSDs), I'm migrating a 40 TB
> image from a 3+2 EC pool to an 8+2 one.
>
> The use case is Veeam backup on XFS filesystems, mounted via KRBD.
>
>
> Backups are running, and I c
Hi
I'm using Ceph Pacific 16.2.1
I'm creating a topic as a user which belongs to a non-default tenant.
I'm using AWS CLI 2 with v3 authentication enabled
aws --profile=ceph-myprofile --endpoint=$HOST_S3_API --region="" sns
create-topic --name=fishtopic --attributes='{"push-endpoint": "
http://my
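A filled-in version of that call might look like the following (the endpoint variable, topic name, and push-endpoint are placeholders; the long form --endpoint-url is spelled out). Listing the topics afterwards shows which tenant the topic actually landed in:

aws --profile=ceph-myprofile --endpoint-url=$HOST_S3_API --region="" sns \
    create-topic --name=fishtopic \
    --attributes='{"push-endpoint": "http://<push-endpoint>"}'
aws --profile=ceph-myprofile --endpoint-url=$HOST_S3_API --region="" sns list-topics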
Hi,
I am trying to benchmark Ceph rbd-nbd performance. Are there any
authoritative existing rbd-nbd benchmark results to compare against?
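One common way to produce your own numbers for comparison is to map an image with rbd-nbd and run fio against the nbd device (pool/image names and fio parameters below are only examples):

rbd create bench_pool/bench_img --size 10G
rbd-nbd map bench_pool/bench_img        # prints the device, e.g. /dev/nbd0
fio --name=randwrite --ioengine=libaio --direct=1 --rw=randwrite \
    --bs=4k --iodepth=32 --runtime=60 --filename=/dev/nbd0
rbd-nbd unmap /dev/nbd0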
BR
Bobby
On Wed, Jun 23, 2021 at 9:59 AM Matthias Ferdinand wrote:
>
> On Tue, Jun 22, 2021 at 02:36:00PM +0200, Ml Ml wrote:
> > Hello List,
> >
> > all of a sudden I cannot mount a specific rbd device anymore:
> >
> > root@proxmox-backup:~# rbd map backup-proxmox/cluster5 -k
> > /etc/ceph/ceph.client.admin.k
Hi Yuval
Thank you very much for the link
This gave me some useful info from
https://github.com/ceph/ceph/tree/master/examples/boto3#aws-cli
Regards
Daniel
On Tue, 22 Jun 2021 at 18:34, Yuval Lifshitz wrote:
> Hi Daniel,
> You are correct, currently, only v2 auth is supported for topic manage
On 22/06/2021 12:58, Massimo Sgaravatto wrote:
Sorry for the very naive question:
I know how to set/check the rgw quota for a user (using radosgw-admin)
But how can a radosgw user check what quota is assigned to their
account, using the S3 and/or Swift interface?
I think you ca
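For the out-of-band route, the quota actually assigned to a user is visible on the admin side with radosgw-admin (uid is a placeholder), which is one way to feed whatever gets republished to users:

radosgw-admin user info --uid=<user>      # output includes user_quota and bucket_quota
radosgw-admin user stats --uid=<user> --sync-stats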
On 2021-06-22 20:21, Gilles Mocellin wrote:
Hello Cephers,
On a capacity-oriented Ceph cluster (13 nodes, 130 × 8 TB HDD OSDs), I'm
migrating a 40 TB image from a 3+2 EC pool to an 8+2 one.
The use case is Veeam backup on XFS filesystems, mounted via KRBD.
Backups are running, and I can see 200MB/
Thank you. The image solved our problem.
Jan
From: David Orman
Sent: Tuesday, 22 June 2021 17:27:12
To: Jansen, Jan
Cc: ceph-users
Subject: Re: [ceph-users] Having issues to start more than 24 OSDs per host
https://tracker.ceph.com/issues/50526
https://git
On Tue, Jun 22, 2021 at 02:36:00PM +0200, Ml Ml wrote:
> Hello List,
>
> all of a sudden I cannot mount a specific rbd device anymore:
>
> root@proxmox-backup:~# rbd map backup-proxmox/cluster5 -k
> /etc/ceph/ceph.client.admin.keyring
> /dev/rbd0
>
> root@proxmox-backup:~# mount /dev/rbd0 /mnt/backu
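When the map succeeds but the mount then hangs or fails, the kernel log and the image's watcher list are usually the first things worth checking (image and pool names taken from the report above):

dmesg | tail -n 50                    # filesystem or rbd I/O errors
rbd status backup-proxmox/cluster5    # shows whether another client still holds a watcher
rbd info backup-proxmox/cluster5
blkid /dev/rbd0                       # confirm a filesystem signature is still present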