[ceph-users] Re: Delete objects on large bucket very slow

2019-09-12 Thread tuan dung
Can anyone help me explain: how does Ceph delete an object? How does the delete operation flow work? - Br, Dương Tuấn Dũng 0986153686 On Fri, Sep 13, 2019 at 10:51 AM Konstantin Shalygin wrote: > On 9/13/19 10:46 AM, tuan dung wrote: > > thank for your answer. Can you explain c

[ceph-users] Re: Delete objects on large bucket very slow

2019-09-12 Thread Konstantin Shalygin
On 9/13/19 10:46 AM, tuan dung wrote: thanks for your answer. Can you explain clearly for me: - How does Ceph's object-delete operation work (i.e., how does the delete flow in Ceph work)? - Using the s3cmd tool installed on a client to delete objects, how can the speed of the delete o

[ceph-users] Re: Delete objects on large bucket very slow

2019-09-12 Thread tuan dung
Hi Paul, thanks for your answer. Can you explain clearly for me: - How does Ceph's object-delete operation work (i.e., how does the delete flow in Ceph work)? - Using the s3cmd tool installed on a client to delete objects, how can the speed of the delete operation (objects/s) be improved with i
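
For reference, a minimal sketch of bulk deletion from the client side with s3cmd; the bucket name "mybucket" and prefix "logs/" are hypothetical, and recent s3cmd versions reportedly batch recursive deletes into S3 multi-object-delete requests, which is usually faster than deleting keys one at a time:

  # delete every object under a prefix; --recursive walks the whole listing
  s3cmd del --recursive s3://mybucket/logs/

  # or drop the whole bucket together with its contents
  s3cmd rb --recursive s3://mybucket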

[ceph-users] Legacy Bluestore Stats

2019-09-12 Thread gaving
I created a cluster on Nautilus 14.2.0, then upgraded to 14.2.1, and finally 14.2.3. Now I am seeing this warning, which I thought should only appear if the cluster was created pre-Nautilus: "Legacy BlueStore stats reporting detected on XX OSD". I can't seem to find any information about this on a c
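
A hedged sketch of the usual remedy, assuming OSD id 12 as a placeholder: stopping the OSD and running a BlueStore repair converts the on-disk statistics to the new per-pool format, and the warning can also be muted via the bluestore_warn_on_legacy_statfs option:

  # convert one OSD's legacy stats (repeat per affected OSD)
  systemctl stop ceph-osd@12
  ceph-bluestore-tool repair --path /var/lib/ceph/osd/ceph-12
  systemctl start ceph-osd@12

  # or just silence the warning cluster-wide
  ceph config set global bluestore_warn_on_legacy_statfs false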

[ceph-users] Re: ceph fs crashes on simple fio test

2019-09-12 Thread Robert LeBlanc
On Tue, Sep 10, 2019 at 1:11 PM Frank Schilder wrote: > Hi Robert, > > I have metadata on SSD (3x rep) and data on 8+2 EC on spinning disks, so > the speed difference is orders of magnitude. Our usage is quite metadata > heavy, so this suits us well. In particular since EC pools are high > thro
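
For context, a minimal sketch of a layout like the one Frank describes (replicated metadata, 8+2 EC data); pool and profile names are hypothetical, pg counts are placeholders, and placing the metadata pool on SSDs would additionally need a matching CRUSH rule:

  # 8+2 erasure-code profile on spinning disks
  ceph osd erasure-code-profile set ec-8-2 k=8 m=2 crush-device-class=hdd

  # replicated metadata pool (pin to SSDs via a CRUSH rule)
  ceph osd pool create cephfs_metadata 64 64 replicated

  # EC data pool; overwrites must be enabled for CephFS use
  ceph osd pool create cephfs_data_ec 256 256 erasure ec-8-2
  ceph osd pool set cephfs_data_ec allow_ec_overwrites true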

[ceph-users] Re: 645% Clean PG's in Dashboard

2019-09-12 Thread Lenz Grimmer
Hi, On 8/30/19 10:54 AM, c.li...@falseprivacy.org wrote: > I also enabled the Dashboard and saw that the PG Status showed "645% > clean" PGs. This cluster was originally installed with version > Jewel, so could it be some legacy setting or the like that is causing this? No, this seems to be a genuine bug

[ceph-users] Re: Bug identified: Dashboard proxy configuration is not working as expected

2019-09-12 Thread Lenz Grimmer
Hi Thomas, On 9/12/19 10:21 AM, Thomas wrote: > I have successfully configured the Ceph dashboard following this > documentation. > > According to the documentation you can configure a URL prefix with this > command: > ceph config set mgr mgr/das

[ceph-users] Re: the difference between the system and admin flags when creating an RGW user

2019-09-12 Thread Casey Bodley
Admin users bypass most authorization checks, so they are useful for maintenance on buckets/objects owned by other users. They also have full read/write access to the /admin/ APIs. System users are for requests made on behalf of radosgw itself. Multisite replication requires a system user in
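
For illustration, how these flags are typically applied; the user ids are hypothetical, and the --admin spelling is taken from the question in this thread (worth checking against radosgw-admin --help on your version). Creating a system user is the documented step for multisite sync:

  # system user used by multisite replication
  radosgw-admin user create --uid=sync-user --display-name="Synchronization User" --system

  # grant the admin flag to an existing user
  radosgw-admin user modify --uid=jdoe --admin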

[ceph-users] Re: Delete objects on large bucket very slow

2019-09-12 Thread Paul Emmerich
--max-concurrent-ios helps with deletion speed; there's also a memory leak during deletion, which is being fixed here: https://github.com/ceph/ceph/pull/30174 Paul -- Paul Emmerich Looking for help with your Ceph cluster? Contact us at https://croit.io croit GmbH Freseniusstr. 31h 81247 München
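
A hedged example of how that option is usually passed; the bucket name is hypothetical and the default for --max-concurrent-ios is reportedly 32, so raising it increases parallelism during the purge:

  # purge a bucket and all of its objects with more parallel IOs
  radosgw-admin bucket rm --bucket=big-bucket --purge-objects --max-concurrent-ios=128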

[ceph-users] How to use radosgw-admin orphans find?

2019-09-12 Thread EDH - Manuel Rios Fernandez
Hi! We're looking to keep our RGW pools free of orphan objects; checking the documentation and the mailing list, it is not really clear how it works and what it will do. radosgw-admin orphans find --pool= --job-id= loops over all objects in the cluster looking for leaked objects and adds them to a shar
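
Roughly, the documented invocation looks like this; the pool name and job id below are placeholders (default.rgw.buckets.data is just the usual default data pool name). The scan keeps intermediate results in shard objects, and the job can be listed and cleaned up by id afterwards:

  # scan the data pool for RADOS objects no longer referenced by any bucket index
  radosgw-admin orphans find --pool=default.rgw.buckets.data --job-id=orphans-2019-09

  # inspect or clean up scan jobs and their intermediate shard objects
  radosgw-admin orphans list-jobs
  radosgw-admin orphans finish --job-id=orphans-2019-09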

[ceph-users] CentOS deps for ceph-mgr-diskprediction-local

2019-09-12 Thread Dan van der Ster
Hi there, Did anyone get the mgr diskprediction-local plugin working on CentOS? When I enable the plugin with v14.2.3 I get: HEALTH_ERR 2 mgr modules have failed MGR_MODULE_ERROR 2 mgr modules have failed Module 'devicehealth' has failed: Failed to import _strptime because the import lock is
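
For reference, the plugin ships as a separate package on RPM-based distros (package name taken from the subject line) and is enabled per module; whether its Python scikit-learn/scipy dependencies resolve cleanly on CentOS is exactly the open question in this thread:

  # install the module package on the mgr nodes
  yum install ceph-mgr-diskprediction-local

  # enable it and check module status afterwards
  ceph mgr module enable diskprediction_local
  ceph mgr module ls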

[ceph-users] Re: MDSs report slow metadata IOs

2019-09-12 Thread burcarjo
The current setup is only for testing Ceph functionality. My idea is to build a production setup with suitable hardware if all goes fine ... The MDS is running on a node with 4 GB RAM, 1 GbE and a 4-core processor. The metadata pool is on 3 OSD servers with 2 OSDs per node: 24 GB RAM, 12 core

[ceph-users] the difference between the system and admin flags when creating an RGW user

2019-09-12 Thread Wahyu Muqsita
When creating an RGW user with the command: radosgw-admin user create --uid={username} --display-name="{display-name}" [--email={email}] there are two flags I can use: --system and --admin. What are these flags for? -- Wahyu Muqsita Wardana Sys

[ceph-users] Bug identified: Dashboard proxy configuration is not working as expected

2019-09-12 Thread Thomas
Hi, I have successfully configured the Ceph dashboard following this documentation. According to the documentation you can configure a URL prefix with this command: ceph config set mgr mgr/dashboard/url_prefix $PREFIX However, when I try to access
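
For completeness, a minimal sketch of the sequence being discussed; the prefix value "ceph-dashboard" is hypothetical, and disabling/re-enabling the dashboard module is a common (if not strictly documented) way to make sure the new prefix is picked up:

  # serve the dashboard under /ceph-dashboard/ instead of /
  ceph config set mgr mgr/dashboard/url_prefix ceph-dashboard

  # restart the module so it re-reads the setting
  ceph mgr module disable dashboard
  ceph mgr module enable dashboard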

[ceph-users] Re: ceph-volume lvm create leaves half-built OSDs lying around

2019-09-12 Thread Matthew Vernon
On 11/09/2019 12:23, Jan Fajerski wrote: On Wed, Sep 11, 2019 at 11:17:47AM +0100, Matthew Vernon wrote: We keep finding part-made OSDs (they appear not attached to any host, and down and out; but still counting towards the number of OSDs); we never saw this with ceph-disk. On investigation, t
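
Until the underlying ceph-volume behaviour is fixed, cleaning up such leftover OSD ids usually goes along these lines; the OSD id and device below are hypothetical, and this is a sketch rather than a recipe for disks holding data:

  # spot OSDs that are down/out and not attached to any host
  ceph osd tree

  # remove the dangling OSD id together with its auth key and CRUSH entry
  ceph osd purge 57 --yes-i-really-mean-it

  # wipe the partially prepared LVM volumes so the device can be reused
  ceph-volume lvm zap --destroy /dev/sdq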