>
> Thanks for your answer.
>
> Should I open a bug report then?
>
> How would I be able to read more from it? Have multiple threads access
> it and read from it simultaneously?
>
> Marc
>
> On 1/25/24 20:25, Matt Benjamin wrote:
> > Hi Marc,
> >
>
rr threshold)
> --- pthread ID / name mapping for recent threads ---
> 7f2472a89b00 / safe_timer
> 7f2472cadb00 / radosgw
> ...
> log_file
>
> /var/lib/ceph/crash/2024-01-25T13:10:13.909546Z_01ee6e6a-e946-4006-9d32-e17ef2f9df74/log
> --- end dump of recent events
> ":r30303:f3fec4b6-a248-4f3f-be75-b8055e61233a.33081.14",
> "started": "Wed, 06 Dec 2023 10:44:40 GMT",
> "status": "COMPLETE"
> },
> {
> "bucket": ":ec3201cam02:f3fec4b6-a248-4f3f-be75-b805
e radosgw-admin command.
> >>
> >> Have I missed a pagination limit for listing user buckets in the rados
> >> gateway?
> >>
> >> Thanks,
> >> Tom
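If the listing is being done with radosgw-admin (an assumption), a minimal sketch of what to try; the uid is a placeholder and the effect of --max-entries on per-user bucket listings should be verified against your release:

  # list a user's buckets; if output stops around 1000 entries, raise the cap explicitly
  radosgw-admin bucket list --uid=someuser
  radosgw-admin bucket list --uid=someuser --max-entries=10000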
> ...synchronize a policy change? Is this
> effective immediately with strong consistency, or is there some propagation
> delay (hopefully with some upper bound)?
>
>
> Best regards
> Matthias
> ...which is basically an extension
> from zone-level redirect_zone. I found it helpful in realizing CopyObject
> with (x-amz-copy-source) in multisite environments where bucket contents don't
> exist in all zones. This feature is similar to what Matt Benjamin suggested
> about the concept of ...
> ...implement and
> works well for bucket migration.
>
> Cheers,
> Yixin
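For context, the CopyObject-with-copy-source call described above can be exercised from the client side roughly like this; endpoint, bucket, and key names are placeholders:

  # server-side copy: RGW copies the object without the client re-uploading the data
  aws --endpoint-url http://rgw.example.com:7480 s3api copy-object \
      --copy-source src-bucket/some/key \
      --bucket dst-bucket --key some/key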
> ...( https://tracker.ceph.com/issues/57489 ). The interim
> workaround to sync existing objects is to either
>
> * create new objects (or)
>
> * execute "bucket sync run" (see the sketch below)
>
> after creating/enabling the bucket policy.
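A sketch of the second option; the bucket name is a placeholder, and it is typically run against the zone that should pull in the existing objects:

  # force a full sync pass for one bucket after the policy change
  radosgw-admin bucket sync run --bucket=mybucket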
>
> Please note that this issue is specific to bucket policy only but
> doesn't exist for sync-policy set...
> https://hub.docker.com/r/iceyec/ceph-rgw-zipper
>
> Chris
> 4, delete the export:
> ceph nfs export delete nfs4rgw /bucketexport
>
> Ganesha servers go back to normal:
> rook-ceph-nfs-nfs1-a-679fdb795-82tcx 2/2 Running 0 4h30m
> rook-ceph-nfs-nfs4rgw-a-5c594d67dc-nlr42 2/2 Running
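A quick way to confirm the export is really gone (a sketch, using the cluster id from above):

  ceph nfs export ls nfs4rgw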
And to clarify, too, this Aquarium work is the first attempt by folks to
build a file-backed storage setup; it's great to see innovation around this.
Matt
On Thu, Oct 20, 2022 at 1:50 PM Joao Eduardo Luis wrote:
> On 2022-10-20 17:46, Matt Benjamin wrote:
> > The ability to
> [1] https://github.com/aquarist-labs/s3gw-charts
> [2] https://longhorn.io
> Kind regards,
> Rok
> ...(stable)": 2
> },
> "osd": {
> "ceph version 16.2.10 (45fa1a083152e41a408d15505f594ec5f1b4fe17) pacific (stable)": 108
> },
> "mds": {
> "ceph version 16.2.10 (45fa1a083152e41a408d15505f594ec5f1b4fe
> If I can't manage the number of versions, then sooner or later the versions
> will kill the entire cluster:(
> > $ s3cmd abortmp s3://test-bucket/tstfiles/tst05 2~1nElF0c3uq5FnZ9cKlsnGlXKATvjr0g
> > ...
>
>
>
> On the latest master, I see that these objects are deleted immediately
> post abortmp. I believe this issue may have been fixed as part of [1],
> backport
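For anyone reproducing this, a sketch of checking for leftover multipart uploads with s3cmd; bucket, key, and upload id are placeholders:

  # list in-progress multipart uploads, then abort one by its upload id
  s3cmd multipart s3://test-bucket
  s3cmd abortmp s3://test-bucket/tstfiles/tst05 <upload-id>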
has done some similar experiment
> in the past.
Not sure, good question.
Matt
>
> Thanks for any help you can provide!
>
> Jorge
Thank you
> >>
> >> Michal
> >>
> might be a better option.
>
> David
>
> On Fri, May 7, 2021 at 4:21 PM Matt Benjamin wrote:
> >
> > Hi David,
> >
> > I think the solution is most likely the ops log. It is called for
> > every op, and has the transaction id.
> >
> > Matt
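A minimal sketch of enabling the ops log over a unix socket; the section name and socket path are placeholders, and option names should be double-checked for your release:

  # ceph.conf: emit one JSON record per op, including the transaction id
  [client.rgw.gateway1]
      rgw enable ops log = true
      rgw ops log socket path = /var/run/ceph/rgw-ops-log.sock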
e has to be a better way to do this, where the logs
> >> are emitted like the request logs above by beast, so that we can
> >> handle it using journald. If there's an alternative that would
> >> accomplish the same thing, we're very open to suggestions.
> >>
> >> Than
> The "UTF-8-Probleme" group is meeting, as an exception, in the large
> hall this time.
> ...700 1 == req done req=0x55a441452710 op status=0 http_status=200 latency=0.022s ==
> 2021-04-22 10:27:55.445 7f2d85fd4700 1 beast: 0x55a441452710: 10.151.101.15 - - [2021-04-22 10:27:55.0.44549s] "GET /descript/2020/01/17/1b819bd9-5036-4ca4-98f7-b0308e1e3017 HT
> > David
> ...was able to "source" its data from the OSDs and
> sync that way, then I'd be up for setting up a skeleton implementation, but
> it sounds like RGW metadata is only going to record things which are flowing
> through the gateway. (Is that correct?)
>
>
>
maintain?
> ...facing is that the
> secondary cluster is way behind the master cluster because of the relatively
> slow speed.
> - Is there anything else I can do to optimize replication speed?
>
> Thanks for your comments!
>
> Nicolas
>
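Not an optimization as such, but a sketch of how the lag can at least be measured while tuning; the bucket name is a placeholder:

  # overall multisite sync state, then per-bucket detail
  radosgw-admin sync status
  radosgw-admin bucket sync status --bucket=mybucket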
ce of entries?
> Should we manually reshard this bucket again?
>
> Thanks!
>
> Dan
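If a manual reshard turns out to be the answer, a sketch of the usual steps; bucket name and shard count are placeholders:

  # check reshard state, then reshard to a higher shard count
  radosgw-admin reshard status --bucket=mybucket
  radosgw-admin bucket reshard --bucket=mybucket --num-shards=101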
> awscli client side output during a failed multipart upload:
> > root@jump:~# aws --no-verify-ssl --endpoint-url http://lab-object.cancercollaboratory.org:7480 s3 cp 4GBfile s3://troubleshooting
> > upload failed: ./4GBfile to s3://troubleshooting/4GBfile An error
> > occur
.
> - Delete the index file of the bucket.
>
> Pray to god it doesn't happen again.
>
> The new experimental tool to find orphans in RGW is still pending
> backport to Nautilus.
>
> Maybe @Matt Benjamin can give us an ETA for getting that tool backported...
>
> Regards,
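Assuming the experimental tool referred to is the rgw-orphan-list script (an assumption; the data pool name is a placeholder), its use once the backport lands looks roughly like:

  # scans the data pool and writes a list of suspected orphan RADOS objects
  rgw-orphan-list default.rgw.buckets.data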
The lifecycle changes in question do not change the semantics or any
API of lifecycle. The behavior change was a regression.
regards,
Matt
On Wed, Aug 5, 2020 at 12:12 PM Daniel Poelzleithner wrote:
>
> On 2020-08-05 15:23, Matt Benjamin wrote:
>
> > There is new lifecycle p
very
helpful in identifying the issue.
Matt
On Wed, Aug 5, 2020 at 9:23 AM Matt Benjamin wrote:
>
> Hi Chris,
>
> There is new lifecycle processing logic backported to Octopus, it
> looks like, in 15.2.3. I'm looking at the non-current calculation to
> see if it could inco
},
> >> "ID": "Expiration & incomplete uploads",
> >> "Prefix": "",
> >> "Status": "Enabled",
> >> "NoncurrentVersionExpiration": {
> >> "NoncurrentDays":
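For reference, a complete rule along the same lines, applied with the aws CLI; endpoint, bucket name, and day counts are placeholders, not recommendations:

  aws --endpoint-url http://rgw.example.com:7480 s3api \
      put-bucket-lifecycle-configuration --bucket my-bucket \
      --lifecycle-configuration '{
        "Rules": [{
          "ID": "Expiration & incomplete uploads",
          "Filter": {"Prefix": ""},
          "Status": "Enabled",
          "NoncurrentVersionExpiration": {"NoncurrentDays": 30},
          "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7}
        }]
      }'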
> ...55:54.038+0100 7f45adad7700 2 req 15 0.00402s s3:multi_object_delete http status=403
> 2020-07-11T17:55:54.038+0100 7f45adad7700 1 == req done req=0x7f45adaced50 op status=0 http_status=403 latency=0.00402s ==
> 2020-07-11T17:55:54.038+0100 7f45adad7700 20 process_
S3 cluster got heavy uploads and deletes.
>
> Are those params usable? For us it doesn't make sense to keep deleted
> objects sitting in GC for 2 hours.
>
> Regards
> Manuel
>
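The 2-hour delay being discussed is rgw_gc_obj_min_wait; a ceph.conf sketch of the related knobs, with placeholder values rather than recommendations:

  [client.rgw]
      rgw gc obj min wait = 1800       # seconds before a deleted object is eligible for GC (default 7200)
      rgw gc processor period = 3600   # how often the GC processor cycles (default 3600)
      rgw gc max objs = 32             # number of GC queue shards (default 32)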
performance or does it treat the full object "paths" as
> a completely flat namespace?
My keycloak setup does not have any such field in the
>>>> introspection result and I can't seem to figure out how to make this all
>>>> work.
>>>>
>>>> I cranked up the logging to 20/20 and still did not see any hints as to
>>>> w
> ...objects or just let it run automatically?
An issue presenting exactly like this was fixed in spring of last year, for
certain on nautilus and higher.
Matt
On Sat, Apr 11, 2020, 12:04 PM <346415...@qq.com> wrote:
> Ceph Version : Mimic 13.2.4
>
> The cluster has been running steadily for more than a year; recently I
> found cluster usage
ure and the sheer size of the PR -- 22 commits and
> 32 files altered -- my guess is that it will not be backported to Nautilus.
> However I'll invite the principals to weigh in.
>
> Best,
>
> Eric
>
> --
> J. Eric Ivancich
> he/him/his
> Red Hat Storage
> An
.1_darthvader.png
> ce2fc9ee-edc8-4dc7-a3fe-b1458c67168b.5805.1_2019-10-15-090436_1254x522_scrubbed.png
> ce2fc9ee-edc8-4dc7-a3fe-b1458c67168b.5805.1_kanariepiet.jpg
>
> root@node1:~# rados -p tier2-hdd ls
> ce2fc9ee-edc8-4dc7-a3fe-b1458c67168b.5805.1__shadow_.FEruUOZaVJXJcOG-e2tO1xcInNz
his is very
> important to us too.
>
>
> Robert LeBlanc
> PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1
k to prioritize
> > requests based on some hard-coded request classes. It's not especially
> > useful in its current form, but we do have plans to further elaborate
> > the classes and eventually pass the information down to OSDs for
> > integrated QoS.
> >
> >