Re: [ceph-users] RGW performance with low object sizes

2019-12-04 Thread Christian
> > >> There's a bug in the current stable Nautilus release that causes a loop > and/or crash in get_obj_data::flush (you should be able to see it gobbling > up CPU in perf top). This is the related issue: > https://tracker.ceph.com/issues/39660 -- it should be fixed as soon as > 14.2.5 is released
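For anyone checking whether they are hitting this, a quick sketch (assumes perf is installed on the RGW host):

    # watch for get_obj_data::flush dominating the samples of a busy radosgw
    perf top -p "$(pidof radosgw)"

    # confirm which release the daemons are running; the fix is expected in 14.2.5
    ceph versions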

Re: [ceph-users] RGW performance with low object sizes

2019-12-03 Thread Paul Emmerich
On Tue, Dec 3, 2019 at 6:43 PM Robert LeBlanc wrote: > > On Tue, Dec 3, 2019 at 9:11 AM Ed Fisher wrote: >> >> >> >> On Dec 3, 2019, at 10:28 AM, Robert LeBlanc wrote: >> >> Did you make progress on this? We have a ton of < 64K objects as well and >> are struggling to get good performance out o

Re: [ceph-users] RGW performance with low object sizes

2019-12-03 Thread Robert LeBlanc
On Tue, Dec 3, 2019 at 9:11 AM Ed Fisher wrote: > > > On Dec 3, 2019, at 10:28 AM, Robert LeBlanc wrote: > > Did you make progress on this? We have a ton of < 64K objects as well and > are struggling to get good performance out of our RGW. Sometimes we have > RGW instances that are just gobbling

Re: [ceph-users] RGW performance with low object sizes

2019-12-03 Thread Ed Fisher
> On Dec 3, 2019, at 10:28 AM, Robert LeBlanc wrote: > > Did you make progress on this? We have a ton of < 64K objects as well and are > struggling to get good performance out of our RGW. Sometimes we have RGW > instances that are just gobbling up CPU even when there are no requests to > the

Re: [ceph-users] RGW performance with low object sizes

2019-12-03 Thread Robert LeBlanc
On Tue, Nov 19, 2019 at 9:34 AM Christian wrote: > Hi, > > I used https://github.com/dvassallo/s3-benchmark to measure some >> performance values for the rgws and got some unexpected results. >> Everything above 64K has excellent performance but below it drops down to >> a fraction of the speed a

[ceph-users] RGW bucket stats - strange behavior & slow performance requiring RGW restarts

2019-12-03 Thread David Monschein
Hi all, I've been observing some strange behavior with my object storage cluster running Nautilus 14.2.4. We currently have around 1800 buckets (A small percentage of those buckets are actively used), with a total of 13.86M objects. We have 20 RGWs right now, 10 for regular S3 access, and 10 for s

Re: [ceph-users] RGW performance with low object sizes

2019-11-19 Thread Christian
Hi, I used https://github.com/dvassallo/s3-benchmark to measure some > performance values for the rgws and got some unexpected results. > Everything above 64K has excellent performance but below it drops down to > a fraction of the speed and responsiveness resulting in even 256K objects > being fa

[ceph-users] RGW performance with low object sizes

2019-11-14 Thread Christian
Hi, I used https://github.com/dvassallo/s3-benchmark to measure some performance values for the rgws and got some unexpected results. Everything above 64K has excellent performance but below it drops down to a fraction of the speed and responsiveness resulting in even 256K objects being faster tha

[ceph-users] RGW DNS bucket names with multi-tenancy

2019-11-01 Thread Florian Engelmann
Hi, is it possible to access buckets like: https://../? Some SDKs use DNS bucket names only. Using such an SDK the endpoint would look like ".". All the best, Florian

Re: [ceph-users] RGW/swift segments

2019-10-31 Thread Peter Eisch
r with the RGWObjectExpirer code who I could confer? peter From: ceph-users on behalf of Peter Eisch Date: Monday, October 28, 2019 at 3:06 PM To: "ceph-users@lists.ceph.com" Subject: Re: [ceph-users] RGW/swift segments I should have noted this is with Luminous 12.2.12 and consistent with

Re: [ceph-users] RGW/swift segments

2019-10-31 Thread Peter Eisch
error, or are not the named recipient(s), please immediately notify the sender and delete this e-mail message. v2.64 From: ceph-users on behalf of Peter Eisch Date: Monday, October 28, 2019 at 3:06 PM To: "ceph-users@lists.ceph.com" Subject: Re: [ceph-users] RGW/swift segments I s

Re: [ceph-users] RGW/swift segments

2019-10-28 Thread Peter Eisch
(s), please immediately notify the sender and delete this e-mail message. v2.64 From: ceph-users on behalf of Peter Eisch Date: Monday, October 28, 2019 at 9:28 AM To: "ceph-users@lists.ceph.com" Subject: [ceph-users] RGW/swift segments Hi, When uploading to RGW via swift I

[ceph-users] RGW/swift segments

2019-10-28 Thread Peter Eisch
Hi, When uploading to RGW via swift I can set an expiration time. The files being uploaded are large. We segment them using the swift upload ‘-S’ arg. This results in a 0-byte file in the bucket and all the data frags landing in a *_segments bucket. When the expiration passes the 0-byte fil
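For reference, a sketch of the workflow being described (container, file name and sizes are made up):

    # upload in 1 GiB segments: a 0-byte manifest lands in 'backups',
    # the data segments land in 'backups_segments'
    swift upload -S 1073741824 backups bigdump.tar

    # set an expiration (two days, in seconds) on the manifest object
    swift post -H "X-Delete-After: 172800" backups bigdump.tar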

Re: [ceph-users] rgw: multisite support

2019-10-11 Thread M Ranga Swami Reddy
Air.com >> >> >> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of >> Joachim Kraftmayer >> Sent: Friday, October 04, 2019 7:50 AM >> To: M Ranga Swami Reddy >> Cc: ceph-users; d...@ceph.io >> Subject: Re: [ceph-users] rgw: multi

Re: [ceph-users] rgw: multisite support

2019-10-07 Thread M Ranga Swami Reddy
: Friday, October 04, 2019 7:50 AM > To: M Ranga Swami Reddy > Cc: ceph-users; d...@ceph.io > Subject: Re: [ceph-users] rgw: multisite support > > Maybe this will help you: > > https://docs.ceph.com/docs/master/radosgw/multisite/#migrating-a-single-site-system-to-multi-site >

Re: [ceph-users] rgw: multisite support

2019-10-04 Thread DHilsbos
...@performair.com www.PerformAir.com From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Joachim Kraftmayer Sent: Friday, October 04, 2019 7:50 AM To: M Ranga Swami Reddy Cc: ceph-users; d...@ceph.io Subject: Re: [ceph-users] rgw: multisite support Maybe this will help you: https

Re: [ceph-users] rgw: multisite support

2019-10-04 Thread Joachim Kraftmayer
Maybe this will help you: https://docs.ceph.com/docs/master/radosgw/multisite/#migrating-a-single-site-system-to-multi-site -- Clyso GmbH On 03.10.2019 at 13:32, M Ranga Swami Reddy wrote: Thank you. Do we have a quick document to do this migration? Thanks
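For context, the linked procedure boils down to promoting the existing default zonegroup/zone into a named realm and then pulling it from the second cluster; a rough sketch with placeholder names, endpoints and keys (the full steps, including the system user, are in the linked doc):

    # on the existing cluster: wrap the current setup in a realm and make it the master zone
    radosgw-admin realm create --rgw-realm=myrealm --default
    radosgw-admin zonegroup rename --rgw-zonegroup default --zonegroup-new-name=eu
    radosgw-admin zone rename --rgw-zone default --zone-new-name=dc1 --rgw-zonegroup=eu
    radosgw-admin zonegroup modify --rgw-realm=myrealm --rgw-zonegroup=eu \
        --endpoints http://rgw-dc1:8080 --master --default
    radosgw-admin zone modify --rgw-realm=myrealm --rgw-zonegroup=eu --rgw-zone=dc1 \
        --endpoints http://rgw-dc1:8080 --master --default
    radosgw-admin period update --commit

    # on the second cluster: pull the realm/period and add the secondary zone
    radosgw-admin realm pull --url=http://rgw-dc1:8080 --access-key=SYSKEY --secret=SYSSECRET
    radosgw-admin period pull --url=http://rgw-dc1:8080 --access-key=SYSKEY --secret=SYSSECRET
    radosgw-admin zone create --rgw-zonegroup=eu --rgw-zone=dc2 \
        --endpoints=http://rgw-dc2:8080 --access-key=SYSKEY --secret=SYSSECRET
    radosgw-admin period update --commit

SYSKEY/SYSSECRET and the endpoints above are placeholders for a system user's credentials; the gateways need a restart after the period commits.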

Re: [ceph-users] rgw: multisite support

2019-10-03 Thread M Ranga Swami Reddy
Thank you. Do we have a quick document to do this migration? Thanks Swami On Thu, Oct 3, 2019 at 4:38 PM Paul Emmerich wrote: > On Thu, Oct 3, 2019 at 12:03 PM M Ranga Swami Reddy > wrote: > > > > Below url says: "Switching from a standalone deployment to a multi-site > replicated deployment i

Re: [ceph-users] rgw: multisite support

2019-10-03 Thread Paul Emmerich
On Thu, Oct 3, 2019 at 12:03 PM M Ranga Swami Reddy wrote: > > Below url says: "Switching from a standalone deployment to a multi-site > replicated deployment is not supported. > https://docs.openstack.org/project-deploy-guide/charm-deployment-guide/latest/app-rgw-multisite.html this is wrong, m

Re: [ceph-users] rgw: multisite support

2019-10-03 Thread M Ranga Swami Reddy
Below url says: "Switching from a standalone deployment to a multi-site replicated deployment is not supported. https://docs.openstack.org/project-deploy-guide/charm-deployment-guide/latest/app-rgw-multisite.html Please advise. On Thu, Oct 3, 2019 at 3:28 PM M Ranga Swami Reddy wrote: > Hi, >

[ceph-users] rgw: multisite support

2019-10-03 Thread M Ranga Swami Reddy
Hi, I am using 2 Ceph clusters in different DCs (about 500 km apart) running Ceph 12.2.11. Now I want to set up RGW multisite using the above 2 clusters. Is it possible? If yes, please share a good document describing how to do it. Thanks Swami

Re: [ceph-users] rgw S3 lifecycle cannot keep up

2019-10-03 Thread Christian Pedersen
Thank you Robin. Looking at the video it doesn't seem like a fix is anywhere near ready. Am I correct in concluding that Ceph is not the right tool for my use-case? Cheers, Christian On Oct 3 2019, at 6:07 am, Robin H. Johnson wrote: > On Wed, Oct 02, 2019 at 01:48:40PM +0200, Christian Pederse

Re: [ceph-users] rgw S3 lifecycle cannot keep up

2019-10-02 Thread Robin H. Johnson
On Wed, Oct 02, 2019 at 01:48:40PM +0200, Christian Pedersen wrote: > Hi Martin, > > Even before adding cold storage on HDD, I had the cluster with SSD only. That > also could not keep up with deleting the files. > I am no where near I/O exhaustion on the SSDs or even the HDDs. Please see my pres

Re: [ceph-users] rgw S3 lifecycle cannot keep up

2019-10-02 Thread Christian Pedersen
Hi Martin, Even before adding cold storage on HDD, I had the cluster with SSD only. That also could not keep up with deleting the files. I am nowhere near I/O exhaustion on the SSDs or even the HDDs. Cheers, Christian On Oct 2 2019, at 1:23 pm, Martin Verges wrote: > Hello Christian, > > the

Re: [ceph-users] rgw S3 lifecycle cannot keep up

2019-10-02 Thread Martin Verges
Hello Christian, the problem is that HDDs are not capable of providing the IOPS required for "~4 million small files". -- Martin Verges Managing director Mobile: +49 174 9335695 E-Mail: martin.ver...@croit.io Chat: https://t.me/MartinVerges croit GmbH, Freseniusstr. 31h, 81247 Munich CEO: Mar

[ceph-users] rgw S3 lifecycle cannot keep up

2019-10-02 Thread Christian Pedersen
Hi, Using the S3 gateway I store ~4 million small files in my cluster every day. I have a lifecycle setup to move these files to cold storage after a day and delete them after two days. The default storage is SSD based and the cold storage is HDD. However the rgw lifecycle process cannot keep up
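Not a fix from the thread, but these are the lifecycle knobs that usually matter when LC processing falls behind; the values below are purely illustrative:

    # ceph.conf on the RGW hosts -- illustrative values only
    [global]
    # let lifecycle run all day instead of the default 00:00-06:00 window
    rgw lifecycle work time = 00:00-23:59
    # number of lifecycle shards that bucket lifecycle work is spread across (default 32)
    rgw lc max objs = 64

    # show per-bucket lifecycle progress and trigger a manual pass
    radosgw-admin lc list
    radosgw-admin lc process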

[ceph-users] RGW: Upgrade from mimic 13.2.6 -> nautilus 14.2.2 causes Bad Requests on some buckets

2019-08-29 Thread Jacek Suchenia
Hi Recently, after a few weeks of testing Nautilus on our clusters, we decided to upgrade our oldest one (installed in 2012 as a Bobtail release). After the gateway upgrade we found that for some buckets (40% of ~2000) the same request is handled differently. With Mimic RGW - OK (200), with Nautilu

Re: [ceph-users] RGW how to delete orphans

2019-08-13 Thread Andrei Mikhailovsky
ge. Can anyone suggest the next steps, please? Cheers Andrei - Original Message - > From: "Florian Engelmann" > To: "Andreas Calminder" , "Christian Wuerdig" > > Cc: "ceph-users" > Sent: Friday, 26 October, 2018 11:28:19

Re: [ceph-users] RGW 4 MiB objects

2019-08-01 Thread Thomas Bennett
Hi Aleksey, Thanks for the detailed breakdown! We're currently using replication pools but will be testing ec pools soon enough and this is a useful set of parameters to look at. Also, I had not considered the bluestore parameters, thanks for pointing that out. Kind regards On Wed, Jul 31, 2019

Re: [ceph-users] RGW - Multisite setup -> question about Bucket - Sharding, limitations and synchronization

2019-07-31 Thread Eric Ivancich
> On Jul 30, 2019, at 7:49 AM, Mainor Daly > wrote: > > Hello, > > (everything in context of S3) > > > I'm currently trying to better understand bucket sharding in combination with > an multisite - rgw setup and possible limitations. > > At the moment I understand that a bucket has a bucke

Re: [ceph-users] RGW 4 MiB objects

2019-07-31 Thread Aleksey Gutikov
Hi Thomas, We did some investigation a while ago and came up with several rules for how to configure rgw and osd for big files stored on an erasure-coded pool. Hope it is useful, and if I have made any mistakes, please let me know. S3 object saving pipeline: - S3 object is divided into multipart shards

Re: [ceph-users] RGW configuration parameters

2019-07-30 Thread Casey Bodley
On 7/30/19 3:03 PM, Thomas Bennett wrote: Hi Casey, Thanks for your reply. Just to make sure I understand correctly-  would that only be if the S3 object size for the put/get is multiples of your rgw_max_chunk_size? whenever the object size is larger than a single chunk Kind regards, To

Re: [ceph-users] RGW configuration parameters

2019-07-30 Thread Thomas Bennett
Hi Casey, Thanks for your reply. Just to make sure I understand correctly- would that only be if the S3 object size for the put/get is multiples of your rgw_max_chunk_size? Kind regards, Tom On Tue, 30 Jul 2019 at 16:57, Casey Bodley wrote: > Hi Thomas, > > I see that you're familiar with rg

Re: [ceph-users] RGW configuration parameters

2019-07-30 Thread Casey Bodley
Hi Thomas, I see that you're familiar with rgw_max_chunk_size, which is the most object data that radosgw will write in a single osd request. Each PutObj and GetObj request will issue multiple osd requests in parallel, up to these configured window sizes. Raising these values can potentially
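A minimal ceph.conf sketch of what raising these knobs looks like (the values are made up for illustration, not tuned recommendations):

    # ceph.conf -- values made up for illustration
    [global]
    # per-GetObj read-ahead window (bytes)
    rgw get obj window size = 33554432
    # minimum in-flight write window per PutObj (bytes)
    rgw put obj min window size = 33554432
    # size of each individual RADOS request the windows are filled with
    rgw max chunk size = 4194304

The windows cap how much data the parallel chunk-sized OSD requests may keep in flight per client request.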

[ceph-users] RGW configuration parameters

2019-07-30 Thread Thomas Bennett
Does anyone know what these parameters are for. I'm not 100% sure I understand what a window is in context of rgw objects: - rgw_get_obj_window_size - rgw_put_obj_min_window_size The code points to throttling I/O. But some more info would be useful. Kind regards, Tom __

[ceph-users] RGW 4 MiB objects

2019-07-30 Thread Thomas Bennett
Hi, Does anyone out there use bigger than default values for rgw_max_chunk_size and rgw_obj_stripe_size? I'm planning to set rgw_max_chunk_size and rgw_obj_stripe_size to 20MiB, as it suits our use case and from our testing we can't see any obvious reason not to. Is there some convincing experi
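The change being considered would look roughly like this in ceph.conf (the 20 MiB values from the post written out in bytes; this is the poster's plan, not a general recommendation):

    [global]
    rgw max chunk size = 20971520
    rgw obj stripe size = 20971520

One thing to keep an eye on is osd_max_write_size (90 MB by default), which bounds the largest single OSD write.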

[ceph-users] RGW - Multisite setup -> question about Bucket - Sharding, limitations and synchronization

2019-07-30 Thread Mainor Daly
Hello, (everything in context of S3) I'm currently trying to better understand bucket sharding in combination with an multisite - rgw setup and possible limitations. At the moment I understand that a bucket has a bucket index, which is a list of objects within the bucket. There are also inde

Re: [ceph-users] RGW Admin REST metadata caps

2019-07-23 Thread Casey Bodley
the /admin/metadata apis require caps of type "metadata" source: https://github.com/ceph/ceph/blob/master/src/rgw/rgw_rest_metadata.h#L37 On 7/23/19 12:53 PM, Benjeman Meekhof wrote: Ceph Nautilus, 14.2.2, RGW civetweb. Trying to read from the RGW admin api /metadata/user with request URL lik
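In radosgw-admin terms that cap would be granted like this (the uid is a placeholder):

    # grant read access to the /admin/metadata endpoints
    radosgw-admin caps add --uid=someadminuser --caps="metadata=read"

    # or full access if the client also writes metadata entries
    radosgw-admin caps add --uid=someadminuser --caps="metadata=*"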

Re: [ceph-users] RGW Admin REST metadata caps

2019-07-23 Thread Benjeman Meekhof
Please disregard, the listed caps are sufficient and there does not seem to be any issue here. Between adding the metadata caps and re-testing I made a mistake in passing credentials to the module and naturally received an AccessDenied for bad credentials. thanks, Ben On Tue, Jul 23, 2019 at 1

[ceph-users] RGW Admin REST metadata caps

2019-07-23 Thread Benjeman Meekhof
Ceph Nautilus, 14.2.2, RGW civetweb. Trying to read from the RGW admin api /metadata/user with request URL like: GET /admin/metadata/user?key=someuser&format=json But am getting a 403 denied error from RGW. Shouldn't the caps below be sufficient, or am I missing something? "caps": [ {

Re: [ceph-users] RGW: Is 'radosgw-admin reshard stale-instances rm' safe?

2019-07-12 Thread Rudenko Aleksandr
Hi, Casey. Can you help me with my question? From: Konstantin Shalygin Date: Wednesday, 26 June 2019 at 07:29 To: Rudenko Aleksandr Cc: "ceph-users@lists.ceph.com" , Casey Bodley Subject: Re: [ceph-users] RGW: Is 'radosgw-admin reshard stale-instances rm' safe?

Re: [ceph-users] RGW Beast crash 14.2.1

2019-07-11 Thread Casey Bodley
On 7/11/19 3:28 AM, EDH - Manuel Rios Fernandez wrote: Hi Folks, This night RGW crashed without sense using beast as fronted. We solved turning on civetweb again. Should be report to tracker? Please do. It looks like this crashed during startup. Can you please include the rgw_frontends co

[ceph-users] RGW Beast crash 14.2.1

2019-07-11 Thread EDH - Manuel Rios Fernandez
Hi Folks, Last night RGW crashed for no apparent reason using Beast as frontend. We resolved it by turning civetweb back on. Should it be reported to the tracker? Regards Manuel Centos 7.6 Linux ceph-rgw03 3.10.0-957.21.3.el7.x86_64 #1 SMP Tue Jun 18 16:35:19 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux

Re: [ceph-users] RGW: Is 'radosgw-admin reshard stale-instances rm' safe?

2019-06-25 Thread Konstantin Shalygin
On 6/25/19 12:46 AM, Rudenko Aleksandr wrote: Hi, Konstantin. Thanks for the reply. I know about stale instances and that they remained from prior version. I ask about “marker” of bucket. I have bucket “clx” and I can see his current marker in stale-instances list. As I know, stale-instan

Re: [ceph-users] RGW: Is 'radosgw-admin reshard stale-instances rm' safe?

2019-06-24 Thread Rudenko Aleksandr
. From: Konstantin Shalygin Date: Friday, 21 June 2019 at 15:30 To: Rudenko Aleksandr Cc: "ceph-users@lists.ceph.com" Subject: Re: [ceph-users] RGW: Is 'radosgw-admin reshard stale-instances rm' safe? Hi, folks. I have Luminous 12.2.12. Auto-resharding is enabled. In st

Re: [ceph-users] RGW: Is 'radosgw-admin reshard stale-instances rm' safe?

2019-06-21 Thread Konstantin Shalygin
Hi, folks. I have Luminous 12.2.12. Auto-resharding is enabled. In stale instances list I have: # radosgw-admin reshard stale-instances list | grep clx "clx:default.422998.196", I have the same marker-id in bucket stats of this bucket: # radosgw-admin bucket stats --bucket clx | grep mar

[ceph-users] RGW: Is 'radosgw-admin reshard stale-instances rm' safe?

2019-06-21 Thread Rudenko Aleksandr
Hi, folks. I have Luminous 12.2.12. Auto-resharding is enabled. In the stale instances list I have:

    # radosgw-admin reshard stale-instances list | grep clx
    "clx:default.422998.196",

I have the same marker-id in the bucket stats of this bucket:

    # radosgw-admin bucket stats --bucket clx | grep marke
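Before running the removal, it may be worth comparing the live bucket instance against the list; a sketch using the bucket from the post:

    # the live instance id / marker of the bucket
    radosgw-admin bucket stats --bucket=clx | grep -E '"(id|marker)"'

    # what the cleanup would act on
    radosgw-admin reshard stale-instances list

    # the removal itself, only once the overlap above is understood
    radosgw-admin reshard stale-instances rm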

[ceph-users] RGW Blocking Behaviour on Inactive / Incomplete PG

2019-06-15 Thread Romit Misra
Hi, I wanted to understand the nature of RGW threads being blocked on requests for a PG which is currently in the INACTIVE state. 1. As long as the PG is inactive, the requests stay blocked. 2. Could the RGW threads use an event-based model: if a PG is inactive, put the current request into a block queue

Re: [ceph-users] RGW Multisite Q's

2019-06-14 Thread Casey Bodley
On 6/12/19 11:49 AM, Peter Eisch wrote: Hi, Could someone be able to point me to a blog or documentation page which helps me resolve the issues noted below? All nodes are Luminous, 12.2.12; one realm, one zonegroup (clustered haproxies fronting), two zones (three rgw in each); All endpoint re

Re: [ceph-users] RGW 405 Method Not Allowed on CreateBucket

2019-06-14 Thread Casey Bodley
Hi Drew, Judging by the "PUT /" in the request line, this request is using the virtual hosted bucket format [1]. This means the bucket name is part of the dns name and Host header, rather than in the path of the http request. Making this work in radosgw takes a little extra configuration [2].
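The extra configuration referred to is essentially the DNS name setting plus a wildcard DNS record; a sketch assuming the gateways answer on s3.example.com (both the domain and the instance section are placeholders):

    # ceph.conf on each gateway; a wildcard record *.s3.example.com
    # must resolve to the gateways
    [client.rgw.gateway1]
    rgw dns name = s3.example.com

    # with that in place, "PUT /" sent with "Host: mybucket.s3.example.com"
    # is interpreted as CreateBucket for "mybucket"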

[ceph-users] RGW 405 Method Not Allowed on CreateBucket

2019-06-14 Thread Drew Weaver
Hello, I am using the latest AWS PHP SDK to create a bucket. Every time I attempt to do this in the log I see: 2019-06-14 11:42:53.092 7fdff5459700 1 civetweb: 0x55c5450249d8: redacted - - [14/Jun/2019:11:42:53 -0400] "PUT / HTTP/1.1" 405 405 - aws-sdk-php/3.100.3 GuzzleHttp/6.3.3 curl/7.29.0

[ceph-users] RGW Multisite Q's

2019-06-12 Thread Peter Eisch
Hi, Could someone point me to a blog or documentation page which helps me resolve the issues noted below? All nodes are Luminous, 12.2.12; one realm, one zonegroup (clustered haproxies fronting), two zones (three RGWs in each); All endpoint references to each zone go through an haproxy.

[ceph-users] RGW multisite sync issue

2019-05-28 Thread Matteo Dacrema
Hi All, I’ve configured a multisite deployment on Ceph Nautilus 14.2.1 with one zone group “eu", one master zone and two secondary zones. If I upload (on the master zone) 200 objects of 80MB each and delete all of them without waiting for the replication to finish, I end up with one zone
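Not an answer from the thread, but the usual first check when zones end up with different object counts is the sync status on each side; a sketch:

    # run on a gateway in each zone; metadata and data sync should report "caught up"
    radosgw-admin sync status

    # per-bucket view when only some buckets diverge (bucket name is a placeholder)
    radosgw-admin bucket sync status --bucket=mybucket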

Re: [ceph-users] RGW metadata pool migration

2019-05-23 Thread Konstantin Shalygin
What are the metadata pools in an RGW deployment that need to sit on the fastest medium to better the client experience from an access standpoint ? Also is there an easy way to migrate these pools in a PROD scenario with minimal to no-outage if possible ? Just change crush rule to place defaul
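A sketch of what that crush-rule change could look like (the rule name and pool names are assumptions based on default zone naming; re-homing pools triggers backfill, so plan for data movement):

    # replicated rule restricted to SSD-class OSDs
    ceph osd crush rule create-replicated rgw-meta-ssd default host ssd

    # move the RGW metadata/index pools onto it (pool names follow the default zone)
    ceph osd pool set default.rgw.meta crush_rule rgw-meta-ssd
    ceph osd pool set default.rgw.log crush_rule rgw-meta-ssd
    ceph osd pool set default.rgw.buckets.index crush_rule rgw-meta-ssd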

Re: [ceph-users] RGW metadata pool migration

2019-05-23 Thread Janne Johansson
On Wed, May 22, 2019 at 17:43, Nikhil Mitra (nikmitra) < nikmi...@cisco.com> wrote: > Hi All, > > What are the metadata pools in an RGW deployment that need to sit on the > fastest medium to better the client experience from an access standpoint ? > > Also is there an easy way to migrate these pools

[ceph-users] RGW metadata pool migration

2019-05-22 Thread Nikhil Mitra (nikmitra)
Hi All, What are the metadata pools in an RGW deployment that need to sit on the fastest medium to better the client experience from an access standpoint ? Also is there an easy way to migrate these pools in a PROD scenario with minimal to no-outage if possible ? Regards, Nikhil __

Re: [ceph-users] RGW Bucket unable to list buckets 100TB bucket

2019-05-18 Thread EDH - Manuel Rios Fernandez
May 2019 10:14 To: ceph-users@lists.ceph.com Subject: Re: [ceph-users] RGW Bucket unable to list buckets 100TB bucket On 10/05/2019 08:42, EDH - Manuel Rios Fernandez wrote: > Hi > > Last night we added 2 Intel Optane NVMe > > Generated 4 partitions to get the max perfor

Re: [ceph-users] RGW Bucket unable to list buckets 100TB bucket

2019-05-10 Thread Oscar Tiderman
objects is painful; maybe this helps to > allow software to complete the listing? > > Best Regards > Manuel > > -Original Message- > From: Matt Benjamin Sent: Friday, May 3, > 2019 15:47 > To: EDH - Manuel Rios Fernandez > CC: ceph-users >

Re: [ceph-users] RGW Bucket unable to list buckets 100TB bucket

2019-05-09 Thread EDH - Manuel Rios Fernandez
---Original Message--- From: ceph-users On behalf of EDH - Manuel Rios Fernandez Sent: Saturday, May 4, 2019 15:53 To: 'Matt Benjamin' CC: 'ceph-users' Subject: Re: [ceph-users] RGW Bucket unable to list buckets 100TB bucket Hi Folks, The user is telling us that the

Re: [ceph-users] RGW Bucket unable to list buckets 100TB bucket

2019-05-04 Thread EDH - Manuel Rios Fernandez
fdeefdc%24/20190430074414/0.cbrevision:get_obj:http > status=206 > 2019-05-03 15:37:28.959 7f4a68484700 1 == req done > req=0x55f2fde20970 op status=-104 http_status=206 == > > > -Original Message- > From: EDH - Manuel Rios Fernandez Sent: > Friday

[ceph-users] RGW BEAST mimic backport dont show customer IP

2019-05-03 Thread EDH - Manuel Rios Fernandez
Hi Folks, We migrated our RGW from Civetweb to the Beast frontend backported to Mimic; the performance is impressive compared with the old one. But the Ceph logs don't show the client peer IP, checked with debug rgw = 1 and 2. The Ceph documentation doesn't tell us much more. How w

Re: [ceph-users] RGW Bucket unable to list buckets 100TB bucket

2019-05-03 Thread Matt Benjamin
42f/Volume_Unknown_fbf0ea7a-af96-4dd4-9ad5-dbf6efdeefdc%24/20190430074414/0.cbrevision:get_obj:http > status=206 > 2019-05-03 15:37:28.959 7f4a68484700 1 == req done req=0x55f2fde20970 op > status=-104 http_status=206 ====== > > > -Original Message- > From: EDH - Manuel Rios Fern

Re: [ceph-users] RGW Bucket unable to list buckets 100TB bucket

2019-05-03 Thread EDH - Manuel Rios Fernandez
us=206 == -Original Message- From: EDH - Manuel Rios Fernandez Sent: Friday, May 3, 2019 15:12 To: 'Matt Benjamin' CC: 'ceph-users' Subject: RE: [ceph-users] RGW Bucket unable to list buckets 100TB bucket Hi Matt, Thanks for your help, We have done th

Re: [ceph-users] RGW Bucket unable to list buckets 100TB bucket

2019-05-03 Thread EDH - Manuel Rios Fernandez
min Sent: Friday, May 3, 2019 14:00 To: EDH - Manuel Rios Fernandez CC: ceph-users Subject: Re: [ceph-users] RGW Bucket unable to list buckets 100TB bucket Hi Folks, Thanks for sharing your ceph.conf along with the behavior. There are some odd things there. 1. rgw_num_rados_

Re: [ceph-users] RGW Bucket unable to list buckets 100TB bucket

2019-05-03 Thread Matt Benjamin
Hi Folks, Thanks for sharing your ceph.conf along with the behavior. There are some odd things there. 1. rgw_num_rados_handles is deprecated--it should be 1 (the default), but changing it may require you to check and retune the values for objecter_inflight_ops and objecter_inflight_op_bytes to b
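For reference, Matt's first point translates to something like this in ceph.conf (numbers are placeholders scaled up from the defaults of 1024 ops and 100 MiB, not recommendations):

    [global]
    # deprecated; leave at the default of 1
    rgw num rados handles = 1
    # in-flight objecter operations allowed per client instance
    objecter inflight ops = 2048
    # in-flight objecter data allowed per client instance (bytes)
    objecter inflight op bytes = 209715200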

[ceph-users] RGW Bucket unable to list buckets 100TB bucket

2019-05-03 Thread EDH - Manuel Rios Fernandez
Hi, We have a Ceph deployment on version 13.2.5, with several buckets holding millions of files.

      services:
        mon: 3 daemons, quorum CEPH001,CEPH002,CEPH003
        mgr: CEPH001(active)
        osd: 106 osds: 106 up, 106 in
        rgw: 2 daemons active

      data:
        pools: 17 pools, 7120 pgs o

Re: [ceph-users] RGW Beast frontend and ipv6 options

2019-05-03 Thread Wido den Hollander
On 5/2/19 4:08 PM, Daniel Gryniewicz wrote: > Based on past experience with this issue in other projects, I would > propose this: > > 1. By default (rgw frontends=beast), we should bind to both IPv4 and > IPv6, if available. > > 2. Just specifying port (rgw frontends=beast port=8000) should app

Re: [ceph-users] RGW Beast frontend and ipv6 options

2019-05-02 Thread Abhishek Lekshmanan
Daniel Gryniewicz writes: > After discussing with Casey, I'd like to propose some clarifications to > this. > > First, we do not treat EAFNOSUPPORT as a non-fatal error. Any other > error binding is fatal, but that one we warn and continue. > > Second, we treat "port=" as expanding to "endpoin

Re: [ceph-users] RGW Beast frontend and ipv6 options

2019-05-02 Thread Daniel Gryniewicz
After discussing with Casey, I'd like to propose some clarifications to this. First, we do not treat EAFNOSUPPORT as a non-fatal error. Any other error binding is fatal, but that one we warn and continue. Second, we treat "port=" as expanding to "endpoint=0.0.0.0:, endpoint=[::]". Then, w

Re: [ceph-users] RGW Beast frontend and ipv6 options

2019-05-02 Thread Daniel Gryniewicz
Based on past experience with this issue in other projects, I would propose this: 1. By default (rgw frontends=beast), we should bind to both IPv4 and IPv6, if available. 2. Just specifying port (rgw frontends=beast port=8000) should apply to both IPv4 and IPv6, if available. 3. If the use

[ceph-users] RGW Beast frontend and ipv6 options

2019-04-26 Thread Abhishek Lekshmanan
Currently RGW's beast frontend supports ipv6 via the endpoint configurable. The port option will bind to ipv4 _only_. http://docs.ceph.com/docs/master/radosgw/frontends/#options Since many Linux systems may default the net.ipv6.bindv6only sysctl to true, it usually means that specifying
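In the meantime, the behaviour can be made explicit per gateway; a sketch (the instance name is a placeholder):

    # ceph.conf -- bind beast explicitly on both address families
    [client.rgw.gateway1]
    rgw frontends = beast endpoint=0.0.0.0:8000 endpoint=[::]:8000

    # single-family variants for comparison:
    #   rgw frontends = beast port=8000            (binds IPv4 only today)
    #   rgw frontends = beast endpoint=[::]:8000   (IPv6; whether it also accepts v4-mapped
    #                                               connections depends on bindv6only)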

Re: [ceph-users] rgw, nss: dropping the legacy PKI token support in RadosGW (removed in OpenStack Ocata)

2019-04-19 Thread Mike Lowe
I’ve run production Ceph/OpenStack since 2015. The reality is running OpenStack Newton (the last one with pki) with a post Nautilus release just isn’t going to work. You are going to have bigger problems than trying to make object storage work with keystone issued tokens. Worst case is you will

Re: [ceph-users] rgw, nss: dropping the legacy PKI token support in RadosGW (removed in OpenStack Ocata)

2019-04-19 Thread Anthony D'Atri
I've been away from OpenStack for a couple of years now, so this may have changed. But back around the Icehouse release, at least, upgrading between OpenStack releases was a major undertaking, so backing an older OpenStack with newer Ceph seems like it might be more common than one might thin

Re: [ceph-users] rgw, nss: dropping the legacy PKI token support in RadosGW (removed in OpenStack Ocata)

2019-04-19 Thread Sage Weil
[Adding ceph-users for better usability] On Fri, 19 Apr 2019, Radoslaw Zarzynski wrote: > Hello, > > RadosGW can use OpenStack Keystone as one of its authentication > backends. Keystone in turn had been offering many token variants > over the time with PKI/PKIz being one of them. Unfortunately, >

Re: [ceph-users] rgw windows/mac clients shitty, develop a new one?

2019-04-19 Thread Brian :
I've always used the standalone mac and Linux package version. Wasn't aware of the 'bundled software' in the installers. Ugh. Thanks for pointing it out. On Thursday, April 18, 2019, Janne Johansson wrote: > https://www.reddit.com/r/netsec/comments/8t4xrl/filezilla_malware/ > > not saying it defi

Re: [ceph-users] rgw windows/mac clients shitty, develop a new one?

2019-04-18 Thread Janne Johansson
https://www.reddit.com/r/netsec/comments/8t4xrl/filezilla_malware/ not saying it definitely is, or isn't, malware-ridden, but it sure was shady at that time. I would suggest not pointing people to it. On Thu, Apr 18, 2019 at 16:41, Brian wrote: > Hi Marc > > Filezilla has decent S3 support ht

Re: [ceph-users] rgw windows/mac clients shitty, develop a new one?

2019-04-18 Thread Brian :
Hi Marc Filezilla has decent S3 support https://filezilla-project.org/ ymmv of course! On Thu, Apr 18, 2019 at 2:18 PM Marc Roos wrote: > > > I have been looking a bit at the s3 clients available to be used, and I > think they are quite shitty, especially this Cyberduck that processes > files w

[ceph-users] rgw windows/mac clients shitty, develop a new one?

2019-04-18 Thread Marc Roos
I have been looking a bit at the s3 clients available to be used, and I think they are quite shitty, especially this Cyberduck that processes files with default reading rights for everyone. I am in the process of advising clients to use for instance this Mountain Duck. But I am not too happy abou

Re: [ceph-users] RGW: Reshard index of non-master zones in multi-site

2019-04-17 Thread Iain Buclaw
On Mon, 8 Apr 2019 at 10:33, Iain Buclaw wrote: > > On Mon, 8 Apr 2019 at 05:01, Matt Benjamin wrote: > > > > Hi Christian, > > > > Dynamic bucket-index sharding for multi-site setups is being worked > > on, and will land in the N release cycle. > > > > What about removing orphaned shards on the

Re: [ceph-users] RGW: Reshard index of non-master zones in multi-site

2019-04-08 Thread Iain Buclaw
On Mon, 8 Apr 2019 at 05:01, Matt Benjamin wrote: > > Hi Christian, > > Dynamic bucket-index sharding for multi-site setups is being worked > on, and will land in the N release cycle. > What about removing orphaned shards on the master? Is the existing tools able to work with that? On the secon

Re: [ceph-users] RGW: Reshard index of non-master zones in multi-site

2019-04-07 Thread Matt Benjamin
Hi Christian, Dynamic bucket-index sharding for multi-site setups is being worked on, and will land in the N release cycle. regards, Matt On Sun, Apr 7, 2019 at 6:59 PM Christian Balzer wrote: > > On Fri, 5 Apr 2019 11:42:28 -0400 Casey Bodley wrote: > > > Hi Iain, > > > > Resharding is not su

Re: [ceph-users] RGW: Reshard index of non-master zones in multi-site

2019-04-07 Thread Christian Balzer
On Fri, 5 Apr 2019 11:42:28 -0400 Casey Bodley wrote: > Hi Iain, > > Resharding is not supported in multisite. The issue is that the master zone > needs to be authoritative for all metadata. If bucket reshard commands run > on the secondary zone, they create new bucket instance metadata that the

Re: [ceph-users] RGW: Reshard index of non-master zones in multi-site

2019-04-06 Thread Iain Buclaw
On Fri, 5 Apr 2019 at 17:42, Casey Bodley wrote: > > Hi Iain, > > Resharding is not supported in multisite. The issue is that the master zone > needs to be authoritative for all metadata. If bucket reshard commands run on > the secondary zone, they create new bucket instance metadata that the ma

Re: [ceph-users] RGW: Reshard index of non-master zones in multi-site

2019-04-05 Thread Casey Bodley
Hi Iain, Resharding is not supported in multisite. The issue is that the master zone needs to be authoritative for all metadata. If bucket reshard commands run on the secondary zone, they create new bucket instance metadata that the master zone never sees, so replication can't reconcile those chan

Re: [ceph-users] RGW: Reshard index of non-master zones in multi-site

2019-04-04 Thread Iain Buclaw
On Wed, 3 Apr 2019 at 09:41, Iain Buclaw wrote: > > On Tue, 19 Feb 2019 at 10:11, Iain Buclaw wrote: > > > > > > # ./radosgw-gc-bucket-indexes.sh master.rgw.buckets.index | wc -l > > 7511 > > > > # ./radosgw-gc-bucket-indexes.sh secondary1.rgw.buckets.index | wc -l > > 3509 > > > > # ./radosgw-gc

Re: [ceph-users] RGW: Reshard index of non-master zones in multi-site

2019-04-03 Thread Iain Buclaw
On Tue, 19 Feb 2019 at 10:11, Iain Buclaw wrote: > > > # ./radosgw-gc-bucket-indexes.sh master.rgw.buckets.index | wc -l > 7511 > > # ./radosgw-gc-bucket-indexes.sh secondary1.rgw.buckets.index | wc -l > 3509 > > # ./radosgw-gc-bucket-indexes.sh secondary2.rgw.buckets.index | wc -l > 3801 > Docum

Re: [ceph-users] RGW: Reshard index of non-master zones in multi-site

2019-04-03 Thread Iain Buclaw
On Tue, 19 Feb 2019 at 10:11, Iain Buclaw wrote: > > On Tue, 19 Feb 2019 at 10:05, Iain Buclaw wrote: > > > > On Tue, 19 Feb 2019 at 09:59, Iain Buclaw wrote: > > > > > > On Wed, 6 Feb 2019 at 09:28, Iain Buclaw wrote: > > > > > > > > On Tue, 5 Feb 2019 at 10:04, Iain Buclaw wrote: > > > > > >

Re: [ceph-users] RGW: Reshard index of non-master zones in multi-site

2019-02-19 Thread Iain Buclaw
On Tue, 19 Feb 2019 at 10:05, Iain Buclaw wrote: > > On Tue, 19 Feb 2019 at 09:59, Iain Buclaw wrote: > > > > On Wed, 6 Feb 2019 at 09:28, Iain Buclaw wrote: > > > > > > On Tue, 5 Feb 2019 at 10:04, Iain Buclaw wrote: > > > > > > > > On Tue, 5 Feb 2019 at 09:46, Iain Buclaw wrote: > > > > > >

Re: [ceph-users] RGW: Reshard index of non-master zones in multi-site

2019-02-19 Thread Iain Buclaw
On Tue, 19 Feb 2019 at 09:59, Iain Buclaw wrote: > > On Wed, 6 Feb 2019 at 09:28, Iain Buclaw wrote: > > > > On Tue, 5 Feb 2019 at 10:04, Iain Buclaw wrote: > > > > > > On Tue, 5 Feb 2019 at 09:46, Iain Buclaw wrote: > > > > > > > > Hi, > > > > > > > > Following the update of one secondary site

Re: [ceph-users] RGW: Reshard index of non-master zones in multi-site

2019-02-19 Thread Iain Buclaw
On Wed, 6 Feb 2019 at 09:28, Iain Buclaw wrote: > > On Tue, 5 Feb 2019 at 10:04, Iain Buclaw wrote: > > > > On Tue, 5 Feb 2019 at 09:46, Iain Buclaw wrote: > > > > > > Hi, > > > > > > Following the update of one secondary site from 12.2.8 to 12.2.11, the > > > following warning have come up. > >

Re: [ceph-users] RGW: Reshard index of non-master zones in multi-site

2019-02-06 Thread Iain Buclaw
On Tue, 5 Feb 2019 at 10:04, Iain Buclaw wrote: > > On Tue, 5 Feb 2019 at 09:46, Iain Buclaw wrote: > > > > Hi, > > > > Following the update of one secondary site from 12.2.8 to 12.2.11, the > > following warning have come up. > > > > HEALTH_WARN 1 large omap objects > > LARGE_OMAP_OBJECTS 1 larg

Re: [ceph-users] RGW: Reshard index of non-master zones in multi-site

2019-02-05 Thread Iain Buclaw
On Tue, 5 Feb 2019 at 09:46, Iain Buclaw wrote: > > Hi, > > Following the update of one secondary site from 12.2.8 to 12.2.11, the > following warning have come up. > > HEALTH_WARN 1 large omap objects > LARGE_OMAP_OBJECTS 1 large omap objects > 1 large objects found in pool '.rgw.buckets.inde

[ceph-users] RGW: Reshard index of non-master zones in multi-site

2019-02-05 Thread Iain Buclaw
Hi, Following the update of one secondary site from 12.2.8 to 12.2.11, the following warning has come up.

    HEALTH_WARN 1 large omap objects
    LARGE_OMAP_OBJECTS 1 large omap objects
        1 large objects found in pool '.rgw.buckets.index'
        Search the cluster log for 'Large omap object found' for m

[ceph-users] RGW multipart objects

2019-01-31 Thread Niels Maumenee
We have a public object storage cluster running Ceph Rados Gateway Luminous 12.2.4, which we plan to update soon. My question concerns some multipart objects that appear to upload successfully, but when retrieving the object the client can only get 4MB. An example would be radosgw-admin object stat --

[ceph-users] rgw expiration problem, a bug ?

2019-01-17 Thread Will Zhao
Hi all: I found that when I set a bucket expiration rule and then, after the expiration date, upload a new object, the new object gets deleted. I found the related code:

    if (prefix_iter->second.expiration_date != boost::none) { // we have checked it before

Why should this be true?
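For reference, the kind of rule being described - a prefix rule with an absolute expiration Date - looks roughly like this when applied with the AWS CLI (bucket, prefix, date and endpoint are placeholders):

    # lifecycle.json
    {
      "Rules": [
        {
          "ID": "expire-logs",
          "Status": "Enabled",
          "Filter": { "Prefix": "logs/" },
          "Expiration": { "Date": "2019-02-01T00:00:00Z" }
        }
      ]
    }

    # apply it against the RGW endpoint
    aws --endpoint-url http://rgw.example.com:8080 s3api \
        put-bucket-lifecycle-configuration --bucket mybucket \
        --lifecycle-configuration file://lifecycle.json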

Re: [ceph-users] rgw/s3: performance of range requests

2019-01-07 Thread Casey Bodley
On 1/7/19 3:15 PM, Giovani Rinaldi wrote: Hello! I've been wondering if range requests are more efficient than doing "whole" requests for relatively large objects (100MB-1GB). More precisely, my doubt is regarding the use of OSD/RGW resources, that is, does the entire object is retrieved from
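For concreteness, the kind of ranged read being discussed would look like this with the AWS CLI (bucket, key and endpoint are placeholders):

    # fetch only the first 4 MiB of a large object
    aws --endpoint-url http://rgw.example.com:8080 s3api get-object \
        --bucket mybucket --key big/archive.bin \
        --range "bytes=0-4194303" /tmp/first-4MiB.part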

[ceph-users] rgw/s3: performance of range requests

2019-01-07 Thread Giovani Rinaldi
Hello! I've been wondering if range requests are more efficient than doing "whole" requests for relatively large objects (100MB-1GB). More precisely, my doubt is regarding the use of OSD/RGW resources, that is, is the entire object retrieved from the OSD only to be sliced afterwards? Or only

[ceph-users] Rgw bucket policy for multi tenant

2018-12-27 Thread Marc Roos
I have seen several posts on the bucket lists; how do you change this for a multi-tenant user Tenant$tenuser? { "Version": "2012-10-17", "Statement": [{ "Effect": "Allow", "Principal": {"AWS": ["arn:aws:iam::usfolks:user/fred"]}, "Action": "s3:PutObjectAcl", "Resource": [
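A sketch of what the completed policy might look like: the tenant rides in the account-id field of the principal ARN (arn:aws:iam::<tenant>:user/<user>), while the resource ARN below assumes the bucket itself lives in the default tenant; bucket name and endpoint are placeholders:

    # policy.json -- a completed version of the snippet above
    {
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": ["arn:aws:iam::usfolks:user/fred"]},
        "Action": "s3:PutObjectAcl",
        "Resource": ["arn:aws:s3:::mybucket/*"]
      }]
    }

    # applied by the bucket owner via the RGW endpoint
    aws --endpoint-url http://rgw.example.com:8080 s3api put-bucket-policy \
        --bucket mybucket --policy file://policy.json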

Re: [ceph-users] RGW Swift metadata dropped when S3 bucket versioning enabled

2018-12-05 Thread Matt Benjamin
Agree, please file a tracker issue with the info, we'll prioritize reproducing it. Cheers, Matt On Wed, Dec 5, 2018 at 11:42 AM Florian Haas wrote: > > On 05/12/2018 17:35, Maxime Guyot wrote: > > Hi Florian, > > > > Thanks for the help. I did further testing and narrowed it down to > > objects

Re: [ceph-users] RGW Swift metadata dropped when S3 bucket versioning enabled

2018-12-05 Thread Florian Haas
On 05/12/2018 17:35, Maxime Guyot wrote: > Hi Florian, > > Thanks for the help. I did further testing and narrowed it down to > objects that have been uploaded when the bucket has versioning enabled. > Objects created before that are not affected: all metadata operations > are still possible. > >
