>
> >> There's a bug in the current stable Nautilus release that causes a loop
> and/or crash in get_obj_data::flush (you should be able to see it gobbling
> up CPU in perf top). This is the related issue:
> https://tracker.ceph.com/issues/39660 -- it should be fixed as soon as
> 14.2.5 is released
On Tue, Dec 3, 2019 at 6:43 PM Robert LeBlanc wrote:
>
> On Tue, Dec 3, 2019 at 9:11 AM Ed Fisher wrote:
>>
>> On Dec 3, 2019, at 10:28 AM, Robert LeBlanc wrote:
>>
>> Did you make progress on this? We have a ton of < 64K objects as well and
>> are struggling to get good performance out of our RGW. Sometimes we have
>> RGW instances that are just gobbling up CPU even when there are no
>> requests to the
On Tue, Nov 19, 2019 at 9:34 AM Christian wrote:
> Hi,
>
> I used https://github.com/dvassallo/s3-benchmark to measure some
>> performance values for the rgws and got some unexpected results.
>> Everything above 64K has excellent performance but below it drops down to
>> a fraction of the speed a
Hi all,
I've been observing some strange behavior with my object storage cluster
running Nautilus 14.2.4. We currently have around 1800 buckets (a small
percentage of which are actively used), with a total of 13.86M
objects. We have 20 RGWs right now, 10 for regular S3 access, and 10 for
s
Hi,
I used https://github.com/dvassallo/s3-benchmark to measure some
performance values for the rgws and got some unexpected results.
Everything above 64K has excellent performance but below it drops down to a
fraction of the speed and responsiveness resulting in even 256K objects
being faster tha
Hi,
is it possible to access buckets like:
https://../?
Some SDKs use DNS bucket names only. Using such an SDK the endpoint would
look like ".".
All the best,
Florian
r with the RGWObjectExpirer code who I could confer?
peter
From: ceph-users on behalf of Peter Eisch
Date: Monday, October 28, 2019 at 3:06 PM
To: "ceph-users@lists.ceph.com"
Subject: Re: [ceph-users] RGW/swift segments
I should have noted this is with Luminous 12.2.12 and consistent with
From: ceph-users on behalf of Peter Eisch
Date: Monday, October 28, 2019 at 9:28 AM
To: "ceph-users@lists.ceph.com"
Subject: [ceph-users] RGW/swift segments
Hi,
When uploading to RGW via swift I can set an expiration time. The files being
uploaded are large. We segment them using the swift upload ‘-S’ arg. This
results in a 0-byte file in the bucket and all the data frags landing in a
*_segments bucket.
When the expiration passes the 0-byte fil
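For reference, a segmented upload with an expiration header looks roughly like
this (container and file names below are placeholders; X-Delete-After is in
seconds, and -S is the segment size in bytes):

  swift upload -S 1073741824 -H "X-Delete-After: 172800" my-container ./large-file.bin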
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
Joachim Kraftmayer
Sent: Friday, October 04, 2019 7:50 AM
To: M Ranga Swami Reddy
Cc: ceph-users; d...@ceph.io
Subject: Re: [ceph-users] rgw: multisite support
Maybe this will help you:
https://docs.ceph.com/docs/master/radosgw/multisite/#migrating-a-single-site-system-to-multi-site
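Roughly, that document boils down to wrapping the existing default zone and
zonegroup in a realm and marking them as master (names and the endpoint below
are placeholders; follow the linked doc for your exact release, and restart
the gateways afterwards):

  radosgw-admin realm create --rgw-realm=myrealm --default
  radosgw-admin zonegroup rename --rgw-zonegroup default --zonegroup-new-name=us
  radosgw-admin zone rename --rgw-zone default --zone-new-name=us-east --rgw-zonegroup=us
  radosgw-admin zonegroup modify --rgw-realm=myrealm --rgw-zonegroup=us --endpoints http://rgw1:80 --master --default
  radosgw-admin zone modify --rgw-realm=myrealm --rgw-zonegroup=us --rgw-zone=us-east --endpoints http://rgw1:80 --master --default
  radosgw-admin period update --commit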
On 03.10.2019 at 13:32, M Ranga Swami Reddy wrote:
Thank you. Do we have a quick document to do this migration?
Thanks
Swami
On Thu, Oct 3, 2019 at 4:38 PM Paul Emmerich wrote:
> On Thu, Oct 3, 2019 at 12:03 PM M Ranga Swami Reddy
> wrote:
> >
> > Below url says: "Switching from a standalone deployment to a multi-site
> replicated deployment i
On Thu, Oct 3, 2019 at 12:03 PM M Ranga Swami Reddy
wrote:
>
> Below url says: "Switching from a standalone deployment to a multi-site
> replicated deployment is not supported.
> https://docs.openstack.org/project-deploy-guide/charm-deployment-guide/latest/app-rgw-multisite.html
this is wrong, m
The below URL says: "Switching from a standalone deployment to a multi-site
replicated deployment is not supported."
https://docs.openstack.org/project-deploy-guide/charm-deployment-guide/latest/app-rgw-multisite.html
Please advise.
On Thu, Oct 3, 2019 at 3:28 PM M Ranga Swami Reddy
wrote:
> Hi,
>
Hi,
I am using 2 Ceph clusters in different DCs (about 500 km apart), both
running Ceph 12.2.11.
Now I want to set up RGW multisite using the above 2 clusters.
Is it possible? If yes, please share a good document describing how to do it.
Thanks
Swami
Thank you Robin.
Looking at the video it doesn't seem like a fix is anywhere near ready.
Am I correct in concluding that Ceph is not the right tool for my use-case?
Cheers,
Christian
On Oct 3 2019, at 6:07 am, Robin H. Johnson wrote:
> On Wed, Oct 02, 2019 at 01:48:40PM +0200, Christian Pederse
On Wed, Oct 02, 2019 at 01:48:40PM +0200, Christian Pedersen wrote:
> Hi Martin,
>
> Even before adding cold storage on HDD, I had the cluster with SSD only. That
> also could not keep up with deleting the files.
> I am nowhere near I/O exhaustion on the SSDs or even the HDDs.
Please see my pres
Hi Martin,
Even before adding cold storage on HDD, I had the cluster with SSD only. That
also could not keep up with deleting the files.
I am nowhere near I/O exhaustion on the SSDs or even the HDDs.
Cheers,
Christian
On Oct 2 2019, at 1:23 pm, Martin Verges wrote:
> Hello Christian,
>
> the
Hello Christian,
the problem is that HDDs are not capable of providing the large number of
IOs required for "~4 million small files".
--
Martin Verges
Hi,
Using the S3 gateway I store ~4 million small files in my cluster every
day. I have a lifecycle setup to move these files to cold storage after a
day and delete them after two days.
The default storage is SSD based and the cold storage is HDD.
However the rgw lifecycle process cannot keep up
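A lifecycle like that, expressed for the aws CLI, would look roughly like this
(bucket name, endpoint and the COLD storage class are placeholders; the
storage class has to match one defined in the zone's placement):

  aws s3api put-bucket-lifecycle-configuration --bucket my-bucket \
      --endpoint-url http://rgw.example.com \
      --lifecycle-configuration '{
        "Rules": [{
          "ID": "tier-then-expire",
          "Status": "Enabled",
          "Filter": {"Prefix": ""},
          "Transitions": [{"Days": 1, "StorageClass": "COLD"}],
          "Expiration": {"Days": 2}
        }]
      }'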
Hi
Recently, after a few weeks of testing Nautilus on our clusters, we decided
to upgrade our oldest one (installed in 2012 as a Bobtail release). After the
gateway upgrade we found that, for some buckets only (40% of ~2000), the
same request is handled differently. With Mimic RGW - OK (200), with
Nautilus
Can anyone suggest the next steps, please?
Cheers
Andrei
- Original Message -
> From: "Florian Engelmann"
> To: "Andreas Calminder" , "Christian Wuerdig"
>
> Cc: "ceph-users"
> Sent: Friday, 26 October, 2018 11:28:19
Hi Aleksey,
Thanks for the detailed breakdown!
We're currently using replication pools but will be testing ec pools soon
enough and this is a useful set of parameters to look at. Also, I had not
considered the bluestore parameters, thanks for pointing that out.
Kind regards
On Wed, Jul 31, 2019
> On Jul 30, 2019, at 7:49 AM, Mainor Daly
> wrote:
>
> Hello,
>
> (everything in context of S3)
>
>
> I'm currently trying to better understand bucket sharding in combination with
> a multisite RGW setup and its possible limitations.
>
> At the moment I understand that a bucket has a bucke
Hi Thomas,
We did some investigation a while back and came up with several rules for how
to configure rgw and osd for big files stored on an erasure-coded pool.
Hope it will be useful.
And if I have made any mistakes, please let me know.
S3 object saving pipeline:
- S3 object is divided into multipart shards
On 7/30/19 3:03 PM, Thomas Bennett wrote:
Hi Casey,
Thanks for your reply.
Just to make sure I understand correctly- would that only be if the
S3 object size for the put/get is multiples of your rgw_max_chunk_size?
whenever the object size is larger than a single chunk
Kind regards,
Tom
Hi Casey,
Thanks for your reply.
Just to make sure I understand correctly- would that only be if the S3
object size for the put/get is multiples of your rgw_max_chunk_size?
Kind regards,
Tom
On Tue, 30 Jul 2019 at 16:57, Casey Bodley wrote:
> Hi Thomas,
>
> I see that you're familiar with rg
Hi Thomas,
I see that you're familiar with rgw_max_chunk_size, which is the most
object data that radosgw will write in a single osd request. Each PutObj
and GetObj request will issue multiple osd requests in parallel, up to
these configured window sizes. Raising these values can potentially
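If you do experiment with raising them, it would presumably be something like
this in ceph.conf (the section name and the values, double the 16 MiB
defaults, are only an illustration):

  [client.rgw.myhost]
  rgw_get_obj_window_size = 33554432
  rgw_put_obj_min_window_size = 33554432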
Does anyone know what these parameters are for? I'm not 100% sure I
understand what a window is in the context of rgw objects:
- rgw_get_obj_window_size
- rgw_put_obj_min_window_size
The code points to throttling I/O. But some more info would be useful.
Kind regards,
Tom
Hi,
Does anyone out there use bigger than default values for rgw_max_chunk_size
and rgw_obj_stripe_size?
I'm planning to set rgw_max_chunk_size and rgw_obj_stripe_size to 20MiB,
as it suits our use case and from our testing we can't see any obvious
reason not to.
Is there some convincing experi
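For what it's worth, the change being considered is just the two options below
(20 MiB expressed in bytes; the section name depends on how your rgw instances
are named):

  [client.rgw.myhost]
  rgw_max_chunk_size = 20971520
  rgw_obj_stripe_size = 20971520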
Hello,
(everything in context of S3)
I'm currently trying to better understand bucket sharding in combination with
a multisite RGW setup and its possible limitations.
At the moment I understand that a bucket has a bucket index, which is a list of
objects within the bucket.
There are also inde
the /admin/metadata apis require caps of type "metadata"
source:
https://github.com/ceph/ceph/blob/master/src/rgw/rgw_rest_metadata.h#L37
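So for the user in this example, something along these lines should be enough
("read" vs "write" or "*" depending on what the module needs):

  radosgw-admin caps add --uid=someuser --caps="metadata=read"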
On 7/23/19 12:53 PM, Benjeman Meekhof wrote:
Ceph Nautilus, 14.2.2, RGW civetweb.
Trying to read from the RGW admin api /metadata/user with request URL lik
Please disregard, the listed caps are sufficient and there does not
seem to be any issue here. Between adding the metadata caps and
re-testing I made a mistake in passing credentials to the module and
naturally received an AccessDenied for bad credentials.
thanks,
Ben
On Tue, Jul 23, 2019 at 1
Ceph Nautilus, 14.2.2, RGW civetweb.
Trying to read from the RGW admin api /metadata/user with request URL like:
GET /admin/metadata/user?key=someuser&format=json
But I am getting a 403 denied error from RGW. Shouldn't the caps below
be sufficient, or am I missing something?
"caps": [
{
Hi, Casey.
Can you help me with my question?
From: Konstantin Shalygin
Date: Wednesday, 26 June 2019 at 07:29
To: Rudenko Aleksandr
Cc: "ceph-users@lists.ceph.com" , Casey Bodley
Subject: Re: [ceph-users] RGW: Is 'radosgw-admin reshard stale-instances rm'
safe?
On 7/11/19 3:28 AM, EDH - Manuel Rios Fernandez wrote:
Hi Folks,
Last night RGW crashed for no apparent reason while using Beast as the frontend.
We worked around it by switching back to civetweb.
Should this be reported to the tracker?
Please do. It looks like this crashed during startup. Can you please
include the rgw_frontends co
Hi Folks,
Last night RGW crashed for no apparent reason while using Beast as the frontend.
We worked around it by switching back to civetweb.
Should this be reported to the tracker?
Regards
Manuel
Centos 7.6
Linux ceph-rgw03 3.10.0-957.21.3.el7.x86_64 #1 SMP Tue Jun 18 16:35:19 UTC
2019 x86_64 x86_64 x86_64 GNU/Linux
On 6/25/19 12:46 AM, Rudenko Aleksandr wrote:
Hi, Konstantin.
Thanks for the reply.
I know about stale instances and that they remained from a prior version.
I am asking about the bucket's “marker”. I have a bucket “clx” and I can see
its current marker in the stale-instances list.
As I know, stale-instan
From: Konstantin Shalygin
Date: Friday, 21 June 2019 at 15:30
To: Rudenko Aleksandr
Cc: "ceph-users@lists.ceph.com"
Subject: Re: [ceph-users] RGW: Is 'radosgw-admin reshard stale-instances rm'
safe?
Hi, folks.
I have Luminous 12.2.12. Auto-resharding is enabled.
In stale instances list I have:
# radosgw-admin reshard stale-instances list | grep clx
"clx:default.422998.196",
I have the same marker-id in bucket stats of this bucket:
# radosgw-admin bucket stats --bucket clx | grep marke
Hi,
I wanted to understand the nature of RGW threads being blocked on requests
for a PG which is currently in an INACTIVE state:
1. As long as the PG is inactive, the requests stay blocked.
2. Could the RGW threads use an event-based model: if a PG is inactive, put the
current request into a blocked queue
On 6/12/19 11:49 AM, Peter Eisch wrote:
Hi,
Could someone point me to a blog or documentation page which would help me
resolve the issues noted below?
All nodes are Luminous, 12.2.12; one realm, one zonegroup (clustered haproxies
fronting), two zones (three rgw in each); All endpoint re
Hi Drew,
Judging by the "PUT /" in the request line, this request is using the
virtual hosted bucket format [1]. This means the bucket name is part of
the dns name and Host header, rather than in the path of the http
request. Making this work in radosgw takes a little extra configuration
[2].
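Concretely, that configuration is roughly an rgw_dns_name entry plus a
wildcard DNS record pointing at the gateway (the hostname below is a
placeholder):

  [client.rgw.myhost]
  rgw_dns_name = s3.example.com
  # and *.s3.example.com must resolve to the radosgw endpoint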
Hello,
I am using the latest AWS PHP SDK to create a bucket.
Every time I attempt to do this in the log I see:
2019-06-14 11:42:53.092 7fdff5459700 1 civetweb: 0x55c5450249d8: redacted - -
[14/Jun/2019:11:42:53 -0400] "PUT / HTTP/1.1" 405 405 - aws-sdk-php/3.100.3
GuzzleHttp/6.3.3 curl/7.29.0
Hi,
Could someone point me to a blog or documentation page which would help me
resolve the issues noted below?
All nodes are Luminous, 12.2.12; one realm, one zonegroup (clustered haproxies
fronting), two zones (three rgw in each); all endpoint references to each zone
go through an haproxy.
Hi All,
I’ve configured a multisite deployment on Ceph Nautilus 14.2.1 with one zone
group “eu", one master zone and two secondary zone.
If I upload ( on the master zone ) for 200 objects of 80MB each and I delete
all of them without waiting for the replication to finish I end up with one
zone
What are the metadata pools in an RGW deployment that need to sit on the
fastest medium to better the client experience from an access standpoint ?
Also is there an easy way to migrate these pools in a PROD scenario with
minimal to no-outage if possible ?
Just change crush rule to place defaul
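A sketch of that crush-rule change, assuming an 'ssd' device class and the
default zone's pool names (adjust both to your cluster; data migrates in the
background, so expect backfill I/O but no outage):

  ceph osd crush rule create-replicated rgw-meta-ssd default host ssd
  ceph osd pool set default.rgw.buckets.index crush_rule rgw-meta-ssd
  ceph osd pool set default.rgw.meta crush_rule rgw-meta-ssd
  ceph osd pool set default.rgw.log crush_rule rgw-meta-ssd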
On Wed, 22 May 2019 at 17:43, Nikhil Mitra (nikmitra) <nikmi...@cisco.com> wrote:
> Hi All,
>
> What are the metadata pools in an RGW deployment that need to sit on the
> fastest medium to better the client experience from an access standpoint ?
>
> Also is there an easy way to migrate these pools
Hi All,
What are the metadata pools in an RGW deployment that need to sit on the
fastest medium to better the client experience from an access standpoint ?
Also is there an easy way to migrate these pools in a PROD scenario with
minimal to no-outage if possible ?
Regards,
Nikhil
May 2019 10:14
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] RGW Bucket unable to list buckets 100TB bucket
On 10/05/2019 08:42, EDH - Manuel Rios Fernandez wrote:
> Hi
>
> Last night we added 2 Intel Optane NVMe drives
>
> Generated 4 partitions to get the max perfor
> objects is painful; maybe this will help to
> allow the software to complete the listing?
>
> Best Regards
> Manuel
>
> -Original Message-
> From: Matt Benjamin
> Sent: Friday, May 3, 2019 15:47
> To: EDH - Manuel Rios Fernandez
> CC: ceph-users
>
-Original Message-
From: ceph-users On behalf of EDH - Manuel Rios Fernandez
Sent: Saturday, May 4, 2019 15:53
To: 'Matt Benjamin'
CC: 'ceph-users'
Subject: Re: [ceph-users] RGW Bucket unable to list buckets 100TB bucket
Hi Folks,
The user is telling us that the
fdeefdc%24/20190430074414/0.cbrevision:get_obj:http
> status=206
> 2019-05-03 15:37:28.959 7f4a68484700 1 == req done
> req=0x55f2fde20970 op status=-104 http_status=206 ==
>
>
> -Original Message-
> From: EDH - Manuel Rios Fernandez
> Sent: Friday
Hi Folks,
We migrated our RGW from civetweb to the Beast frontend (backported to Mimic);
the performance is impressive compared with the old one.
But the Ceph logs don't show the client peer IP, checked with debug rgw = 1
and 2.
The Ceph documentation doesn't tell us much more.
How w
42f/Volume_Unknown_fbf0ea7a-af96-4dd4-9ad5-dbf6efdeefdc%24/20190430074414/0.cbrevision:get_obj:http
> status=206
> 2019-05-03 15:37:28.959 7f4a68484700 1 == req done req=0x55f2fde20970 op
> status=-104 http_status=206 ======
>
>
> -Original Message-
> From: EDH - Manuel Rios Fern
-Original Message-
From: EDH - Manuel Rios Fernandez
Sent: Friday, May 3, 2019 15:12
To: 'Matt Benjamin'
CC: 'ceph-users'
Subject: RE: [ceph-users] RGW Bucket unable to list buckets 100TB bucket
Hi Matt,
Thanks for your help,
We have done th
From: Matt Benjamin
Sent: Friday, May 3, 2019 14:00
To: EDH - Manuel Rios Fernandez
CC: ceph-users
Subject: Re: [ceph-users] RGW Bucket unable to list buckets 100TB bucket
Hi Folks,
Thanks for sharing your ceph.conf along with the behavior.
There are some odd things there.
1. rgw_num_rados_handles is deprecated--it should be 1 (the default),
but changing it may require you to check and retune the values for
objecter_inflight_ops and objecter_inflight_op_bytes to b
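In other words, if you had been running with, say, rgw_num_rados_handles = 8,
a starting point might be to go back to one handle and scale the objecter
limits by roughly the same factor (illustrative values only; the defaults are
1024 ops and 100 MiB):

  rgw_num_rados_handles = 1
  objecter_inflight_ops = 8192
  objecter_inflight_op_bytes = 838860800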
Hi,
We have a Ceph deployment running 13.2.5, with several buckets holding
millions of files.
services:
mon: 3 daemons, quorum CEPH001,CEPH002,CEPH003
mgr: CEPH001(active)
osd: 106 osds: 106 up, 106 in
rgw: 2 daemons active
data:
pools: 17 pools, 7120 pgs
o
On 5/2/19 4:08 PM, Daniel Gryniewicz wrote:
> Based on past experience with this issue in other projects, I would
> propose this:
>
> 1. By default (rgw frontends=beast), we should bind to both IPv4 and
> IPv6, if available.
>
> 2. Just specifying port (rgw frontends=beast port=8000) should app
Daniel Gryniewicz writes:
> After discussing with Casey, I'd like to propose some clarifications to
> this.
>
> First, we do not treat EAFNOSUPPORT as a non-fatal error. Any other
> error binding is fatal, but that one we warn and continue.
>
> Second, we treat "port=" as expanding to "endpoin
After discussing with Casey, I'd like to propose some clarifications to
this.
First, we do not treat EAFNOSUPPORT as a non-fatal error. Any other
error binding is fatal, but that one we warn and continue.
Second, we treat "port=" as expanding to "endpoint=0.0.0.0:,
endpoint=[::]".
Then, w
Based on past experience with this issue in other projects, I would
propose this:
1. By default (rgw frontends=beast), we should bind to both IPv4 and
IPv6, if available.
2. Just specifying port (rgw frontends=beast port=8000) should apply to
both IPv4 and IPv6, if available.
3. If the use
Currently RGW's beast frontend supports ipv6 via the endpoint
configurable. The port option will bind to ipv4 _only_.
http://docs.ceph.com/docs/master/radosgw/frontends/#options
Since many Linux systems may default the sysconfig net.ipv6.bindv6only
flag to true, it usually means that specifying
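For reference, beast accepts multiple explicit endpoints, so a dual-stack
configuration would presumably look something like the line below (the port is
arbitrary; whether both binds succeed on one port depends on the
net.ipv6.bindv6only setting):

  rgw frontends = beast endpoint=0.0.0.0:8000 endpoint=[::]:8000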
I’ve run production Ceph/OpenStack since 2015. The reality is running
OpenStack Newton (the last one with pki) with a post Nautilus release just
isn’t going to work. You are going to have bigger problems than trying to make
object storage work with keystone issued tokens. Worst case is you will
I've been away from OpenStack for a couple of years now, so this may have
changed. But back around the Icehouse release, at least, upgrading between
OpenStack releases was a major undertaking, so backing an older OpenStack with
newer Ceph seems like it might be more common than one might thin
[Adding ceph-users for better usability]
On Fri, 19 Apr 2019, Radoslaw Zarzynski wrote:
> Hello,
>
> RadosGW can use OpenStack Keystone as one of its authentication
> backends. Keystone in turn had been offering many token variants
> over time, with PKI/PKIz being one of them. Unfortunately,
>
I've always used the standalone mac and Linux package version. Wasn't aware
of the 'bundled software' in the installers. Ugh. Thanks for pointing it
out.
On Thursday, April 18, 2019, Janne Johansson wrote:
> https://www.reddit.com/r/netsec/comments/8t4xrl/filezilla_malware/
>
> not saying it defi
https://www.reddit.com/r/netsec/comments/8t4xrl/filezilla_malware/
not saying it definitely is, or isn't malware-ridden, but it sure was shady
at that time.
I would suggest not pointing people to it.
On Thu, 18 Apr 2019 at 16:41, Brian wrote:
> Hi Marc
>
> Filezilla has decent S3 support ht
Hi Marc
Filezilla has decent S3 support https://filezilla-project.org/
ymmv of course!
On Thu, Apr 18, 2019 at 2:18 PM Marc Roos wrote:
>
>
> I have been looking a bit at the s3 clients available to be used, and I
> think they are quite shitty, especially this Cyberduck that processes
> files w
I have been looking a bit at the S3 clients available, and I think they are
quite shitty, especially this Cyberduck that processes files with default
read rights for everyone. I am in the process of advising clients to use,
for instance, Mountain Duck. But I am not too
happy abou
On Mon, 8 Apr 2019 at 10:33, Iain Buclaw wrote:
>
> On Mon, 8 Apr 2019 at 05:01, Matt Benjamin wrote:
> >
> > Hi Christian,
> >
> > Dynamic bucket-index sharding for multi-site setups is being worked
> > on, and will land in the N release cycle.
> >
>
> What about removing orphaned shards on the
On Mon, 8 Apr 2019 at 05:01, Matt Benjamin wrote:
>
> Hi Christian,
>
> Dynamic bucket-index sharding for multi-site setups is being worked
> on, and will land in the N release cycle.
>
What about removing orphaned shards on the master? Are the existing
tools able to work with that?
On the secon
Hi Christian,
Dynamic bucket-index sharding for multi-site setups is being worked
on, and will land in the N release cycle.
regards,
Matt
On Sun, Apr 7, 2019 at 6:59 PM Christian Balzer wrote:
>
> On Fri, 5 Apr 2019 11:42:28 -0400 Casey Bodley wrote:
>
> > Hi Iain,
> >
> > Resharding is not su
On Fri, 5 Apr 2019 11:42:28 -0400 Casey Bodley wrote:
> Hi Iain,
>
> Resharding is not supported in multisite. The issue is that the master zone
> needs to be authoritative for all metadata. If bucket reshard commands run
> on the secondary zone, they create new bucket instance metadata that the
On Fri, 5 Apr 2019 at 17:42, Casey Bodley wrote:
>
> Hi Iain,
>
> Resharding is not supported in multisite. The issue is that the master zone
> needs to be authoritative for all metadata. If bucket reshard commands run on
> the secondary zone, they create new bucket instance metadata that the ma
Hi Iain,
Resharding is not supported in multisite. The issue is that the master zone
needs to be authoritative for all metadata. If bucket reshard commands run
on the secondary zone, they create new bucket instance metadata that the
master zone never sees, so replication can't reconcile those chan
On Wed, 3 Apr 2019 at 09:41, Iain Buclaw wrote:
>
> On Tue, 19 Feb 2019 at 10:11, Iain Buclaw wrote:
> >
> >
> > # ./radosgw-gc-bucket-indexes.sh master.rgw.buckets.index | wc -l
> > 7511
> >
> > # ./radosgw-gc-bucket-indexes.sh secondary1.rgw.buckets.index | wc -l
> > 3509
> >
> > # ./radosgw-gc
On Tue, 19 Feb 2019 at 10:11, Iain Buclaw wrote:
>
>
> # ./radosgw-gc-bucket-indexes.sh master.rgw.buckets.index | wc -l
> 7511
>
> # ./radosgw-gc-bucket-indexes.sh secondary1.rgw.buckets.index | wc -l
> 3509
>
> # ./radosgw-gc-bucket-indexes.sh secondary2.rgw.buckets.index | wc -l
> 3801
>
Docum
Hi,
Following the update of one secondary site from 12.2.8 to 12.2.11, the
following warning has come up.
HEALTH_WARN 1 large omap objects
LARGE_OMAP_OBJECTS 1 large omap objects
1 large objects found in pool '.rgw.buckets.index'
Search the cluster log for 'Large omap object found' for m
We have a public object storage cluster running the Ceph RADOS Gateway on
Luminous 12.2.4, which we plan to update soon.
My question concerns some multipart objects that appear to upload
successfully, but when retrieving the object the client can only get 4MB.
An example would be
radosgw-admin object stat --
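(The full form of that command is something like the following, with
bucket/object names as placeholders; the manifest section of the output shows
how RGW has recorded the object's parts.)

  radosgw-admin object stat --bucket=my-bucket --object=my-multipart-object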
Hi all:
I found that when I set a bucket expiration rule, then after the expiration
date, when I upload a new object it gets deleted. I found the related code,
which looks like the following:
if (prefix_iter->second.expiration_date != boost::none) {
//we have checked it before
Why should this be true?
On 1/7/19 3:15 PM, Giovani Rinaldi wrote:
Hello!
I've been wondering if range requests are more efficient than doing "whole"
requests for relatively large objects (100MB-1GB).
More precisely, my doubt is regarding the use of OSD/RGW resources; that
is, is the entire object retrieved from the OSD only to be sliced
afterwards? Or only
I have seen several posts about bucket policies on the list; how do you
change this for a multitenant user, Tenant$tenuser?
{
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Principal": {"AWS": ["arn:aws:iam::usfolks:user/fred"]},
"Action": "s3:PutObjectAcl",
"Resource": [
Agree, please file a tracker issue with the info, we'll prioritize
reproducing it.
Cheers,
Matt
On Wed, Dec 5, 2018 at 11:42 AM Florian Haas wrote:
>
> On 05/12/2018 17:35, Maxime Guyot wrote:
> > Hi Florian,
> >
> > Thanks for the help. I did further testing and narrowed it down to
> > objects
On 05/12/2018 17:35, Maxime Guyot wrote:
> Hi Florian,
>
> Thanks for the help. I did further testing and narrowed it down to
> objects that have been uploaded when the bucket has versioning enabled.
> Objects created before that are not affected: all metadata operations
> are still possible.
>
>