On Fri, Nov 8, 2013 at 5:09 AM, Micha Krause wrote:
> Hi,
>
> I'm trying to set public ACLs to an object, so that I can access the object
> via Web-browser.
> unfortunately without success:
>
> s3cmd setacl --acl-public s3://test/hosts
> ERROR: S3 error: 403 (AccessDenied):
>
> The radosgw log say
On Fri, Nov 8, 2013 at 5:03 AM, wrote:
> All,
>
>
>
> I have configured a rados gateway as per the Dumpling quick instructions on
> a Red Hat 6 server. The idea is to use Swift API to access my cluster via
> this interface.
>
>
>
> Have configured FastCGI, httpd, as per the guides, did all the u
On Fri, Nov 8, 2013 at 6:56 PM, Sebastian Deutsch
wrote:
> Hello,
>
> I've updated ceph from 0.61.4 to 0.72. It went smooth so far ceph status
> gives me a HEALTH_OK.
> Unfortunately starting the radosgw doesn't work anymore:
>
> When I launch:
>
> /usr/bin/radosgw -d -c /etc/ceph/ceph.conf --debu
It's not more dangerous than going through the RESTful interface.
Yehuda
On Wed, Nov 20, 2013 at 12:41 PM, Dominik Mostowiec
wrote:
> Hi,
> I plan to delete 2 buckets, 5M and 15M files.
> This can be dangerous if I do it via:
> radosgw-admin --bucket=largebucket1 --purge-objects bucket rm
> ?
>
On Tue, Nov 26, 2013 at 1:27 PM, Dominik Mostowiec
wrote:
> Hi,
> We have 2 clusters with copy of objects.
> On one of them we split all large buckets (the largest with 17M objects) into
> 256 buckets (shards) each, and we have added 3 extra servers (6->9).
> Old bucket was created in ceph argonaut.
> Now
On Wed, Nov 27, 2013 at 4:46 AM, Sebastian wrote:
> Hi,
>
> we have a setup of 4 Servers running ceph and radosgw. We use it as an
> internal S3 service for our files. The Servers run Debian Squeeze with Ceph
> 0.67.4.
>
> The cluster has been running smoothly for quite a while, but we are curre
On Wed, Nov 27, 2013 at 12:24 AM, Mihály Árva-Tóth
wrote:
> 2013/11/26 Derek Yarnell
>>
>> On 11/26/13, 4:04 AM, Mihály Árva-Tóth wrote:
>> > Hello,
>> >
>> > Is there any idea? I don't know this is s3api limitation or missing
>> > feature?
>> >
>> > Thank you,
>> > Mihaly
>>
>> Hi Mihaly,
>>
>>
I just pushed a fix for review for the s3cmd --setacl issue. It should
land in a stable release soonish.
Thanks,
Yehuda
On Wed, Nov 27, 2013 at 10:12 AM, Shain Miley wrote:
> Derek,
> That's great...I am hopeful it makes it into the next release too...it will
> solve several issues we are having,
ly, the next one is broken again. But as I said, this does not happen for
> all files.
>
> Sebastian
>
> On 27.11.2013, at 21:53, Yehuda Sadeh wrote:
>
>> On Wed, Nov 27, 2013 at 4:46 AM, Sebastian wrote:
>>> Hi,
>>>
>>> we have a setup of 4 Serv
That's an unknown bug. I have a guess as to how the original object was
created. Can you read the original object, while only the copy fails?
On Dec 2, 2013 4:53 AM, "Dominik Mostowiec"
wrote:
> Hi,
> I found that the issue is related to "ETag: -0" (ends in -0).
> Is this a known bug?
>
> --
> Regards
> Domi
> /testbucket/files/192.txt:complete_multipart:http status=200
> 2013-12-01 11:37:15.679192 7f7891fd3700 1 == req done
> req=0x25406d0 http_status=200 ==
>
> Yes, I can read the original object.
>
> --
> Regards
> Dominik
>
> 2013/12/2 Yehuda Sadeh :
>> That
Looks like it. There should be a guard against it (multipart upload
minimum is 5M).
On Mon, Dec 2, 2013 at 12:32 PM, Dominik Mostowiec
wrote:
> Yes, this is probably an upload of an empty file.
> Is this the problem?
>
> --
> Regards
> Dominik
>
>
> 2013/12/2 Yehuda Sadeh
> Thanks.
>
> This error should be triggered from radosgw also.
>
> --
> Regards
> Dominik
>
> 2013/12/2 Yehuda Sadeh :
>> Looks like it. There should be a guard against it (multipart upload
>> minimum is 5M).
>>
>> On Mon, Dec 2, 2013 at 12:32 PM,
I'm having trouble reproducing the issue. What version are you using?
Thanks,
Yehuda
On Mon, Dec 2, 2013 at 2:16 PM, Yehuda Sadeh wrote:
> Actually, I read that differently. It only says that if there's more
> than 1 part, all parts except for the last one need to be > 5M.
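The part-size rule discussed above can be sketched in a few lines. This is an illustrative stand-alone helper, not gateway or client code: it plans multipart parts so that every part except the last meets the 5 MiB minimum, and rejects the empty-upload case that triggered the bug in this thread.

```python
# Illustrative sketch of the S3 multipart rule: with more than one part,
# every part except the last must be at least 5 MiB; an empty upload is
# rejected. This helper only plans (offset, length) pairs.
MIN_PART = 5 * 1024 * 1024  # 5 MiB multipart minimum

def plan_parts(total_size, part_size=MIN_PART):
    if part_size < MIN_PART:
        raise ValueError("part size below the 5 MiB multipart minimum")
    if total_size == 0:
        raise ValueError("empty multipart upload is rejected by the gateway")
    parts = []
    offset = 0
    while offset < total_size:
        end = min(offset + part_size, total_size)
        parts.append((offset, end - offset))  # last part may be short
        offset = end
    return parts

parts = plan_parts(12 * 1024 * 1024)   # a 12 MiB object
print(len(parts), parts[-1][1])        # 3 parts; the last one is 2 MiB
```

A real client would feed each planned (offset, length) slice to an upload-part call and then complete the upload.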
On Mon, Dec 2, 2013 at 4:39 PM, jheryl williams wrote:
> Hello ,
>
> I have been looking all over the net to find a solution to my problem and I
> came across one that you helped someone else with. I was wondering if you
> can assist me as well. I pretty much created a ceph cluster from the step b
I see. Do you have a backtrace for the crash?
On Mon, Dec 2, 2013 at 6:19 PM, Dominik Mostowiec
wrote:
> 0.56.7
>
> On Monday, 2 December 2013, Yehuda Sadeh wrote:
>
>> I'm having trouble reproducing the issue. What version are you using?
>>
>
fixes in it (wip-6919).
Thanks,
Yehuda
On Mon, Dec 2, 2013 at 8:39 PM, Dominik Mostowiec
wrote:
> for another object.
> http://pastebin.com/VkVAYgwn
>
>
> 2013/12/3 Yehuda Sadeh :
>> I see. Do you have backtrace for the crash?
>>
>> On Mon, Dec 2, 2013 a
>
> On Dec 3, 2013 6:43 AM, "Yehuda Sadeh" wrote:
>>
>> I created earlier an issue (6919) and updated it with the relevant
>> issue. This has been fixed in dumpling, although I don't remember
>> hitting the scenario that you did. Was probably hitting
On Tue, Sep 23, 2014 at 7:23 PM, Robin H. Johnson wrote:
> On Tue, Sep 23, 2014 at 03:12:53PM -0600, John Nielsen wrote:
>> Keep Cluster A intact and migrate it to your new hardware. You can do
>> this with no downtime, assuming you have enough IOPS to support data
>> migration and normal usage si
On Tue, Sep 23, 2014 at 4:54 PM, Craig Lewis wrote:
> I've had some issues in my secondary cluster. I'd like to restart
> replication from the beginning, without destroying the data in the secondary
> cluster.
>
> Reading the radosgw-agent and Admin REST API code, I believe I just need to
> stop
imary zone.
Yehuda
>
> Robin, are the mtimes in Cluster B's S3 data important? Just wondering if
> it would be easier to move the data from B to A, and move nodes from B to A
> as B shrinks. Then remove the old A nodes when it's all done.
>
>
> On Tue, Sep 23, 2
On Wed, Sep 24, 2014 at 2:12 PM, Robin H. Johnson wrote:
> On Wed, Sep 24, 2014 at 11:31:29AM -0700, Yehuda Sadeh wrote:
>> On Wed, Sep 24, 2014 at 11:17 AM, Craig Lewis
>> wrote:
>> > Yehuda, are there any potential problems there? I'm wondering if duplicate
>
client
> to the master and everything appears to be replicating without issue.
> Objects have been deleted as well, the sync looks fine, objects are being
> removed from master and slave. I'm pretty sure the large number of orphaned
> "shadow" files that are currently
On Mon, Sep 29, 2014 at 10:44 AM, Lyn Mitchell wrote:
>
>
> Hello ceph users,
>
>
>
> We have a federated gateway configured to replicate between two zones.
> Replication seems to be working smoothly between the master and slave zone,
> however I have a recurring error in the replication log with
On Tue, Sep 23, 2014 at 9:20 AM, Steve Kingsland
wrote:
> Using the S3 API to Object Gateway, let's say that I create an object named
> "/some/path/foo.bar". When I browse this object in Ceph using a graphical S3
> client, "some" and "path" show up as directories. I realize that they're not
> actu
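The point raised in this thread, that S3 "directories" are only a client-side convention, can be shown without any gateway at all. This is a hypothetical stand-alone demo, not radosgw or boto code: keys are flat strings, and clients synthesize folders by grouping on the "/" delimiter, the same way an S3 listing with a delimiter returns CommonPrefixes.

```python
# Demo: S3 keys like "some/path/foo.bar" are flat strings; graphical
# clients fake directories by grouping keys on "/" at one level, as an
# S3 ListObjects call with a delimiter does.
keys = ["some/path/foo.bar", "some/path/baz.txt", "some/other.txt", "top.txt"]

def common_prefixes(keys, prefix="", delimiter="/"):
    """Mimic a delimiter listing: return (objects, prefixes) at one level."""
    objects, prefixes = [], set()
    for k in keys:
        if not k.startswith(prefix):
            continue
        rest = k[len(prefix):]
        if delimiter in rest:
            prefixes.add(prefix + rest.split(delimiter, 1)[0] + delimiter)
        else:
            objects.append(k)
    return objects, sorted(prefixes)

print(common_prefixes(keys))           # (['top.txt'], ['some/'])
print(common_prefixes(keys, "some/"))  # (['some/other.txt'], ['some/path/'])
```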
The agent itself only goes to the gateways it was configured to use.
However, in a cross zone copy of objects, the gateway will round robin
to any of the specified endpoints in its regionmap.
Yehuda
On Wed, Oct 1, 2014 at 3:46 PM, Lyn Mitchell wrote:
> Sorry all for the typo. The master in zon
gins on startup (Loic Dachary)
> * osd: prevent PGs from falling behind when consuming OSDMaps (#7576 Sage
> Weil)
> * osd: prevent old clients from using tiered pools (#8714 Sage Weil)
> * osd: set min_size on erasure pools to data chunk count (Sage Weil)
> * osd: trim old erasure-co
It'd be interesting to see which rados operation is slowing down the
requests. Can you provide a log dump of a request (with 'debug rgw =
20', and 'debug ms = 1'). This might give us a better idea as to
what's going on.
Thanks,
Yehuda
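A minimal ceph.conf fragment for the logging levels requested above. The client section name is a placeholder; match it to your own radosgw client section, and restart the gateway for it to take effect.

```ini
[client.radosgw.gateway]
    debug rgw = 20
    debug ms = 1
```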
On Mon, Oct 6, 2014 at 10:05 AM, Daniel Schneller
wrote:
> Hi
ching permissions for group=2 mask=50
> 36.748063 5 Permissions for group not found
> 36.748064 5 Getting permissions id=documentstore owner=documentstore
> perm=2
> 36.748066 10 uid=documentstore requested perm (type)=2, policy perm=2,
> user_perm_mask=2, acl perm=2
> 36.748069 2 req 983
> traffic log intact. Due to the increased verbosity, I will not post
> the logs inline, but only attach them gzipped.
>
> As before, should the full data set be needed, I can provide
> an archived version.
>
>
>
>
> Thanks for your support!
> Daniel
>
>
>
>
Try passing in 'Server-Port-Secure: 443' header to the auth request.
Yehuda
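A sketch of what that suggestion looks like in a client, assuming a placeholder gateway host, subuser, and key. Only the header construction is shown; the request itself is left commented since it needs a live gateway.

```python
# Hedged sketch: send the Server-Port-Secure header with a Swift auth
# request so radosgw emits an https X-Storage-Url. Host, subuser, and
# key are placeholders.
from urllib.request import Request

req = Request("https://gateway.local/auth", headers={
    "X-Auth-User": "account:swiftuser",   # placeholder subuser
    "X-Auth-Key": "secret",               # placeholder key
    "Server-Port-Secure": "443",          # tells rgw the client port is secure
})
# urlopen(req) would perform the GET; on success (204) the response's
# X-Storage-Url header should then use the https:// scheme.
print(req.headers["Server-port-secure"])  # urllib capitalizes stored header names
```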
On Wed, Oct 8, 2014 at 7:41 AM, Marco Garcês wrote:
> Hi David,
>
> I am indeed using Tengine 2.0.3, but I feel very strange that the
> default config is returning X-Storage-Url in the headers, in http, not
> https as th
age-Url: http://gateway.local:443/swift/v1
> X-Storage-Token:
> AUTH_rgwtk100066726f6e74656e643a737766303030323daad73c8234e91dfba33654a8ca962d64f0f2d492b4ec5b79aee87ac454bd38406d3bee
> X-Auth-Token:
> AUTH_rgwtk100066726f6e74656e643a737766303030323daad73c8234e91dfba33654a8ca962d64f0f2d492b4ec5b79aee87ac454bd38406d3bee
>
>
>
> On Wed, Oct
301a0b53654f8d73f09
> 2014-10-08 18:19:44.155760 7f90e97fa700 2 req
> 2:0.004071:swift-auth:GET /auth:swift_auth_get:http status=204
> 2014-10-08 18:19:44.155771 7f90e97fa700 1 == req done
> req=0x1b9e400 http_status=204 ==
> 2014-10-08 18:19:44.155779 7f90e97fa
0
>
> wr: 0
>
> wr KB: 0
>
>
> .rgw.gc
>
> =
>
> KB: 0
>
> objects: 32
>
> rd: 5,554,407
>
> rd KB: 5,713,942
>
> wr: 8,355,934
>
> w
On Wed, Oct 8, 2014 at 10:00 AM, Daniel Schneller
wrote:
> Ok. How can I tell if stuff is stuck in a queue?
> What to look for?
Correlate a slow request/response that you see in the rgw log to the
same in the osd log. Check how much time the osd thought it took to
process. If there's a huge d
I have a trivial fix for the issue that I'd like to check in and get
this one cleared, but never got to it due to some difficulties with a
proper keystone setup in my environment. If you can and would like to
test it so that we can get it merged, that would be great.
Thanks,
Yehuda
On Wed, Oct 8, 201
a,
> Please share the fix/patch, we could test and confirm the fix status.
>
> Thanks
> Swami
>
> On Thu, Oct 9, 2014 at 10:42 PM, Yehuda Sadeh wrote:
>> I have a trivial fix for the issue that I'd like to check and get this
>> one cleared, but never got to it du
atched radosgw binary and restarting got back a
> working swift.
>
> Cheers
>
> Mark
>
>
> On 10/10/14 07:19, Yehuda Sadeh wrote:
>>
>> Here's the fix, let me know if you need any help with that.
>>
>> Thanks,
>> Yehuda
>>
>>
See this discussion:
http://comments.gmane.org/gmane.comp.file-systems.ceph.user/4992
Yehuda
On Thu, Oct 16, 2014 at 12:11 AM, Shashank Puntamkar
wrote:
> I am planning to use ceph object gateway to store data in ceph
> cluster.I need two different users of Rados gateway to store data in
> dif
On Thu, Oct 23, 2014 at 3:51 PM, Craig Lewis wrote:
> I'm having a problem getting RadosGW replication to work after upgrading to
> Apache 2.4 on my primary test cluster. Upgrading the secondary cluster to
> Apache 2.4 doesn't cause any problems. Both Ceph's apache packages and
> Ubuntu's package
On Fri, Oct 24, 2014 at 8:17 AM, Dane Elwell wrote:
> Hi list,
>
> We're using the object storage in production and billing people based
> on their usage, much like S3. We're also trying to produce things like
> hourly bandwidth graphs for our clients.
>
> We're having some issues with the API not
On Tue, Oct 28, 2014 at 2:23 PM, Pedro Miranda wrote:
> Hi I'm new using Ceph and I have a very basic Ceph cluster with 1 mon in one
> node and 2 OSDs in two separate nodes (all CentOS 7). I followed the
> quick-ceph-deploy tutorial.
> All went well.
>
> Then I started the quick-rgw tutorial. I in
On Fri, Oct 31, 2014 at 3:59 AM, Dane Elwell wrote:
> Hello list,
>
> When we upload a large multipart upload to RGW and it fails, we want
> to abort the upload. On large multipart uploads, with say 1000+ parts,
> it will consistently return 500 errors when trying to abort the
> upload. If you per
On Fri, Oct 31, 2014 at 8:06 AM, Dane Elwell wrote:
> I think I may have answered my own question:
>
> http://tracker.ceph.com/issues/8553
>
> Looks like this is fixed in Giant, which we'll be deploying as soon as
> 0.87.1 is out ;)
>
> Thanks
>
> Dane
>
> On 31 October 2014 09:08, Dane Elwell wr
On Fri, Oct 31, 2014 at 9:48 AM, Marco Garcês wrote:
> Hi there,
>
> I have a few questions regarding pools, radosgw and logging:
>
> 1) How do I turn on radosgw logs for a specific pool?
What do you mean? What do you want to log?
> I have this in my config:
>
> rgw enable ops log = false
This
On Mon, Nov 3, 2014 at 9:37 AM, Narendra Trivedi (natrived)
wrote:
> Thanks. I think the limit is 100 by default and it can be disabled. As far
> as I understand, there is no object limit on the radosgw side of things, only
> from the Swift end (i.e. 5GB) ….right? In short, if someone tries to upload a
>
On Wed, Nov 5, 2014 at 2:08 PM, lakshmi k s wrote:
> Hello -
>
> My ceph cluster needs to have two rados gateway nodes eventually interfacing
> with Openstack haproxy. I have been successful in bringing up one of them.
> What are the steps for additional rados gateway node to be included in
> clus
Also, if that doesn't help, look at the following configurables:
config_opts.h:OPTION(rgw_gc_processor_max_time, OPT_INT, 3600) //
total run time for a single gc processor work
config_opts.h:OPTION(rgw_gc_processor_period, OPT_INT, 3600) // gc
processor cycle time
You may want to reduce the gc
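A ceph.conf fragment showing where those gc configurables go. The values below are illustrative (half the 3600-second defaults quoted above), and the client section name is a placeholder for your own radosgw section.

```ini
[client.radosgw.gateway]
    rgw gc processor max time = 1800
    rgw gc processor period = 1800
```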
On Sat, Nov 15, 2014 at 6:20 AM, Wido den Hollander wrote:
> Hi,
>
> I'm having trouble with creating a new user using the Admin Ops API and
> I'm not sure where the problem lies.
>
> I'm using: http://eu.ceph.com/docs/master/radosgw/adminops/#create-user
>
> Using pycurl I send the requesting usi
On Sun, Nov 16, 2014 at 10:50 PM, Wido den Hollander wrote:
> On 17-11-14 07:44, Lei Dong wrote:
>> I think you should send the data (uid & display-name) as arguments. I
>> successfully create user via adminOps without any problems.
>>
>
> To be clear:
>
> PUT /admin/user?format=json&uid=XXX&displ
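The Admin Ops API authenticates like S3, which is a common stumbling block with the create-user call quoted above. This is a hedged sketch of an AWS-v2-style signature over the resource path; the query arguments (uid, display-name) are not part of the signed string. Keys and the gateway host are placeholders.

```python
# Hedged sketch: sign an Admin Ops "create user" PUT with an AWS-v2
# style signature (HMAC-SHA1 over method, date, and resource path).
# Access key, secret, and host are placeholders.
import base64
import hashlib
import hmac
from email.utils import formatdate

access_key, secret_key = "ADMIN_ACCESS", "ADMIN_SECRET"   # placeholders
method, resource = "PUT", "/admin/user"
date = formatdate(usegmt=True)  # RFC 1123 date for the Date header

# v2 canonical string: METHOD \n Content-MD5 \n Content-Type \n Date \n resource
string_to_sign = f"{method}\n\n\n{date}\n{resource}"
sig = base64.b64encode(
    hmac.new(secret_key.encode(), string_to_sign.encode(), hashlib.sha1).digest()
).decode()

headers = {"Date": date, "Authorization": f"AWS {access_key}:{sig}"}
url = f"http://gateway.local{resource}?format=json&uid=testuser&display-name=Test"
# Issuing the PUT with these headers (pycurl, urllib, etc.) should return
# the new user's metadata as JSON.
```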
On Thu, Nov 20, 2014 at 6:52 PM, Mark Kirkwood
wrote:
> On 21/11/14 14:49, Mark Kirkwood wrote:
>>
>>
>> The only things that look odd in the destination zone logs are 383
>> requests getting 404 rather than 200:
>>
>> $ grep "http_status=404" ceph-client.radosgw.us-west-1.log
>> ...
>> 2014-11-21
On Mon, Nov 24, 2014 at 2:43 PM, Mark Kirkwood
wrote:
> On 22/11/14 10:54, Yehuda Sadeh wrote:
>>
>> On Thu, Nov 20, 2014 at 6:52 PM, Mark Kirkwood
>> wrote:
>
>
>>> Fri Nov 21 02:13:31 2014
>>>
>>> x-amz-copy-source:bucketbig/_mu
On Wed, Nov 26, 2014 at 2:32 PM, b wrote:
> I've been deleting a bucket which originally had 60TB of data in it, with
> our cluster doing only 1 replication, the total usage was 120TB.
>
> I've been deleting the objects slowly using S3 browser, and I can see the
> bucket usage is now down to aroun
On Wed, Nov 26, 2014 at 3:09 PM, b wrote:
> On 2014-11-27 09:38, Yehuda Sadeh wrote:
>>
>> On Wed, Nov 26, 2014 at 2:32 PM, b wrote:
>>>
>>> I've been deleting a bucket which originally had 60TB of data in it, with
>>> our cluster doing only 1 repli
On Wed, Nov 26, 2014 at 3:49 PM, b wrote:
> On 2014-11-27 10:21, Yehuda Sadeh wrote:
>>
>> On Wed, Nov 26, 2014 at 3:09 PM, b wrote:
>>>
>>> On 2014-11-27 09:38, Yehuda Sadeh wrote:
>>>>
>>>>
>>>> On Wed, Nov 26, 2014 at 2
On Thu, Nov 27, 2014 at 2:15 PM, b wrote:
> On 2014-11-27 11:36, Yehuda Sadeh wrote:
>>
>> On Wed, Nov 26, 2014 at 3:49 PM, b wrote:
>>>
>>> On 2014-11-27 10:21, Yehuda Sadeh wrote:
>>>>
>>>>
>>>> On Wed, Nov 26, 2014 at 3
On Thu, Nov 27, 2014 at 9:22 PM, Ben wrote:
> On 2014-11-28 15:42, Yehuda Sadeh wrote:
>>
>> On Thu, Nov 27, 2014 at 2:15 PM, b wrote:
>>>
>>> On 2014-11-27 11:36, Yehuda Sadeh wrote:
>>>>
>>>>
>>>> On Wed, Nov 26, 2014 at 3
On Fri, Nov 28, 2014 at 1:38 PM, Ben wrote:
>
> On 29/11/14 01:50, Yehuda Sadeh wrote:
>>
>> On Thu, Nov 27, 2014 at 9:22 PM, Ben wrote:
>>>
>>> On 2014-11-28 15:42, Yehuda Sadeh wrote:
>>>>
>>>> On Thu, Nov 27, 2014 at 2:15 PM, b
On Sat, Nov 29, 2014 at 2:26 PM, Ben wrote:
>
> On 29/11/14 11:40, Yehuda Sadeh wrote:
>>
>> On Fri, Nov 28, 2014 at 1:38 PM, Ben wrote:
>>>
>>> On 29/11/14 01:50, Yehuda Sadeh wrote:
>>>>
>>>> On Thu, Nov 27, 2014 at 9:22 PM, Ben
On Mon, Dec 1, 2014 at 2:10 PM, Ben wrote:
> On 2014-12-02 08:39, Yehuda Sadeh wrote:
>>
>> On Sat, Nov 29, 2014 at 2:26 PM, Ben wrote:
>>>
>>>
>>> On 29/11/14 11:40, Yehuda Sadeh wrote:
>>>>
>>>>
>>>> On Fri, Nov 28,
On Mon, Dec 1, 2014 at 3:20 PM, Ben wrote:
> On 2014-12-02 09:25, Yehuda Sadeh wrote:
>>
>> On Mon, Dec 1, 2014 at 2:10 PM, Ben wrote:
>>>
>>> On 2014-12-02 08:39, Yehuda Sadeh wrote:
>>>>
>>>>
>>>> On Sat, Nov 29, 2014 at 2
On Mon, Dec 1, 2014 at 4:23 PM, Ben wrote:
> On 2014-12-02 11:21, Yehuda Sadeh wrote:
>>
>> On Mon, Dec 1, 2014 at 3:47 PM, Ben wrote:
>>>
>>> On 2014-12-02 10:40, Yehuda Sadeh wrote:
>>>>
>>>>
>>>> On Mon, Dec 1, 2014 at 3:2
On Mon, Dec 1, 2014 at 3:47 PM, Ben wrote:
> On 2014-12-02 10:40, Yehuda Sadeh wrote:
>>
>> On Mon, Dec 1, 2014 at 3:20 PM, Ben wrote:
>>>
>>> On 2014-12-02 09:25, Yehuda Sadeh wrote:
>>>>
>>>>
>>>> On Mon, Dec 1, 2014 at 2:1
On Mon, Dec 1, 2014 at 4:26 PM, Ben wrote:
> On 2014-12-02 11:25, Yehuda Sadeh wrote:
>>
>> On Mon, Dec 1, 2014 at 4:23 PM, Ben wrote:
...
>>>>>>> How can I tell if the shard has an object in it from the logs?
>>>>>>
>>>>>>
It looks like a bug. Can you open an issue on tracker.ceph.com,
describing what you see?
Thanks,
Yehuda
On Fri, Dec 5, 2014 at 7:17 AM, Georgios Dimitrakakis
wrote:
> It would be nice to see where and how "uploadId"
>
> is being calculated...
>
>
> Thanks,
>
>
> George
>
>
>
>> For example if I
On Sat, Dec 6, 2014 at 10:39 AM, Sage Weil wrote:
> Several things are different/annoying with radosgw than with other Ceph
> daemons:
>
> - binary/package are named 'radosgw' instead of 'ceph-rgw'.
>
> This is cosmetic, but it also makes it fit less well into the
> new /var/lib/ceph/* view of thi
0.5
>
> Have I missed something?
>
> Regards,
>
> George
>
>
>
>> Pushed a fix to wip-10271. Haven't tested it though, let me know if
>> you try it.
>>
>> Thanks,
>> Yehuda
>>
>> On Thu, Dec 11, 2014 at 8:38 AM, Yehuda Sadeh
eorge
>
>
>
>
>
> On Mon, 08 Dec 2014 19:47:59 +0200, Georgios Dimitrakakis wrote:
>>
>> I've just created issue #10271
>>
>> Best,
>>
>> George
>>
>> On Fri, 5 Dec 2014 09:30:45 -0800, Yehuda Sadeh wrote:
>>>
>>
Pushed a fix to wip-10271. Haven't tested it though, let me know if you try it.
Thanks,
Yehuda
On Thu, Dec 11, 2014 at 8:38 AM, Yehuda Sadeh wrote:
> I don't think it has been fixed recently. I'm looking at it now, and
> not sure why it hasn't triggered before in othe
>>>> It is
>>>> : implemented as a FastCGI module using libfcgi, and can be
>>>> used
>>>> in
>>>> : conjunction with any FastCGI capable web server.
>>>>
>>>> Available Packages
>>>> Name
earlier was based off recent development branch. I
>>>>>> just pushed one based off firefly (wip-10271-firefly). It will
>>>>>> probably take a bit to build.
>>>>>>
>>>>>> Yehuda
>>>>>>
>>>>>> On T
There's the 'radosgw-agent' package for debian, e.g., here:
http://ceph.com/debian-giant/pool/main/r/radosgw-agent/radosgw-agent_1.2-1~bpo70+1_all.deb
On Mon, Dec 15, 2014 at 5:12 AM, lakshmi k s wrote:
> Hello -
>
> Can anyone help me locate the Debian-type source packages for radosgw-agent?
>
>
On Thu, Dec 18, 2014 at 11:24 AM, Gregory Farnum wrote:
> On Thu, Dec 18, 2014 at 4:04 AM, Daniele Venzano wrote:
>> Hello,
>>
>> I have been trying to upload multi-gigabyte files to CEPH via the object
>> gateway, using both the swift and s3 APIs.
>>
>> With file up to about 2GB everything works
On Sat, Nov 22, 2014 at 12:47 AM, Vinod H I wrote:
> Thanks for the clarification.
> Now I have done exactly as you suggested.
> "us-east" is the master zone and "us-west" is the secondary zone.
> Each zone has two system users "us-east" and "us-west".
> These system users have same access/secret
I created a ceph tracker issue:
http://tracker.ceph.com/issues/10471
Thanks,
Yehuda
On Tue, Jan 6, 2015 at 10:19 PM, Mark Kirkwood
wrote:
> On 07/01/15 17:43, hemant burman wrote:
>>
>> Hello Yehuda,
>>
>> The issue seems to be with the user data file for the swift subuser not
>> getting synced prope
Sorry for the late response, been backed up with other issues. It
certainly looks like a promising lead, I'll take a closer look at it.
Thanks!
Yehuda
On Fri, Jan 9, 2015 at 1:05 AM, baijia...@126.com wrote:
> I applied the patch from http://tracker.ceph.com/issues/8452,
> ran the s3 test suite, and it still errors
Try setting 'rgw print continue = false' in your ceph.conf.
Yehuda
On Thu, Jan 8, 2015 at 1:34 AM, Walter Valenti wrote:
> Scenario:
> Openstack Juno RDO on Centos7.
> Ceph version: Giant.
>
> On Centos7 there isn't more the old fastcgi,
> but there's "mod_fcgid"
>
>
>
> The apache VH is the fol
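The setting Yehuda suggests above goes in the radosgw client section of ceph.conf; mod_fcgid cannot relay the "100 Continue" interim responses that radosgw emits by default. The section name is a placeholder for your own.

```ini
[client.radosgw.gateway]
    rgw print continue = false
```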
On Tue, Jan 6, 2015 at 1:21 AM, Liu, Xuezhao wrote:
> Hello,
>
>
>
> I am new to ceph and have a problem about ceph object gateway usage, did not
> find enough hints by googling it, so send an email here, thanks.
>
>
>
> I have a ceph server with object gateway configured, and another client node
2015-01-15 1:08 GMT-08:00 Walter Valenti :
>
>
>
>
>
>
> - Original Message -
>> From: Yehuda Sadeh
>> To: Walter Valenti
>> Cc: "ceph-users@lists.ceph.com"
>> Sent: Tuesday, 13 January 2015 1:13
>> Subject: Re: [ceph-u
On Wed, Jan 14, 2015 at 7:27 PM, Liu, Xuezhao wrote:
> Thanks for the replying.
>
> After disable the default site (a2dissite 000-default), I can use libs3's
> commander s3 to create/list bucket, get object also works.
>
> But put object failed:
>
> root@xuezhaoUbuntu74:~# s3 -u put bucket11/seqd
I think you're hitting issue #10271. It has been fixed, but not in a
formal firefly release yet. You can try picking up the unofficial
firefly branch package off the ceph gitbuilder and test it.
Yehuda
On Wed, Jan 21, 2015 at 11:37 AM, Castillon de la Cruz, Eddy Gonzalo
wrote:
>
> Hello Team
On Wed, Jan 21, 2015 at 7:24 PM, Mark Kirkwood
wrote:
> I've been looking at the steps required to enable (say) multi region
> metadata sync where there is an existing RGW that has been in use (i.e non
> trivial number of buckets and objects) which been setup without any region
> parameters.
>
> N
Also, one more point to consider. A bucket that was created at the
default region before a region was set is considered to belong to the
master region.
Yehuda
On Fri, Jan 23, 2015 at 8:40 AM, Yehuda Sadeh wrote:
> On Wed, Jan 21, 2015 at 7:24 PM, Mark Kirkwood
> wrote:
>> I'v
On Wed, Jan 28, 2015 at 8:04 PM, Mark Kirkwood
wrote:
> On 29/01/15 13:58, Mark Kirkwood wrote:
>>
>>
>> However if I
>> try to write to eu-west I get:
>>
>
> Sorry - that should have said:
>
> However if I try to write to eu-*east* I get:
>
> The actual code is (see below) connecting to the endpo
What does your regionmap look like? Is it updated correctly on all zones?
On Thu, Jan 29, 2015 at 1:42 PM, Mark Kirkwood
wrote:
> On 30/01/15 06:31, Yehuda Sadeh wrote:
>>
>> On Wed, Jan 28, 2015 at 8:04 PM, Mark Kirkwood
>> wrote:
>>>
>>>
On Thu, Jan 29, 2015 at 3:27 PM, Mark Kirkwood
wrote:
> On 30/01/15 11:08, Yehuda Sadeh wrote:
>>
>> What does your regionmap look like? Is it updated correctly on all zones?
>>
>
> Regionmap listed below - checking it on all 4 zones produces exactly the
&g
I assume that the problem is not with the object itself, but with one
of the upload mechanism (either client, or rgw, or both). I would be
curious, however, to see if a different S3 client (not the homebrew
one) could upload the object correctly using multipart upload.
Yehuda
On Thu, Jan 29, 2015
On Tue, Jan 20, 2015 at 5:15 PM, Gleb Borisov wrote:
> Hi,
>
> We're experiencing some issues with our radosgw setup. Today we tried to
> copy bunch of objects between two separate clusters (using our own tool
> built on top of java s3 api).
>
> All went smooth until we start copying large objects
I'm having trouble reproducing this one. Are you running on latest
dumpling? Does it happen with any newly created bucket, or just with
buckets that existed before?
Yehuda
On Fri, Dec 6, 2013 at 5:07 AM, Dominik Mostowiec
wrote:
> Hi,
> In version dumpling upgraded from bobtail working create th
On Fri, Dec 6, 2013 at 1:45 AM, Gao, Wei M wrote:
> Hi all,
>
>
>
> I am working on the ceph radosgw(v0.72.1) and when I call the rest api to
> read the bucket policy, I got an internal server error(request URL is:
> /admin/bucket?policy&format=json&bucket=test.).
>
> However, when I call this:
>
090a2b3f
>health HEALTH_OK
>
> ceph -v
> ceph version 0.67.4 (ad85b8bfafea6232d64cb7ba76a8b6e8252fa0c7)
>
> I have strange behavior after cluster create.
> All PGs were on osd.0 and marked as stale degraded.
> After adding more OSDs it didn't get fixed.
> "ceph pg fo
.174.33.13:6800/296091 -- osd_op(client.95391.0:127
> .bucket.meta.test1:default.78189.1 [call version.check_conds,call
> version.read,getxattrs,stat] 4.50558ec5 e192) v4 -- ?+0 0x7ff79400c6e0
> con 0xd8f710
> 2013-12-07 17:32:42.764173 7ffbd96ec700 1 -- 10.174.33.11:0/1270294
> <==
s, it is disabled
>> grep 'cache' /etc/ceph/ceph.conf | grep rgw
>> rgw_cache_enabled = false ;rgw cache enabled
>> rgw_cache_lru_size = 1 ;num of entries in rgw cache
>>
>> --
>> Regards
>> Dominik
>>
>> 2013/1
How did you cancel the uploads? Note that gc entries are not going to
show immediately in the gc list, only after some period. Also, not
sure if rados df counts the entries in omap, where all the gc data
resides.
Yehuda
On Thu, Dec 12, 2013 at 4:32 PM, Joel van Velden wrote:
> In a similar probl
For some reason your bucket list seems to be returning some non-bucket
metadata info. Sounds like there's a mixup in the pools. What does
radosgw-admin zone get (for the us-west zone) return? What are your 'rgw
zone root pool' and 'rgw region root pool'?
Yehuda
On Sun, Dec 15, 2013 at 9:03 PM, wro
his non-bucket metadata info ??
If you delete it you'd lose your zone and region configuration. Note
that you can use the region root pool for that purpose. So first copy
the relevant objects, e.g.,:
$ rados -p .us-west.rgw.root --target-pool=.us.rgw.root cp zone_info.us-west
and then you ca
On Tue, Dec 17, 2013 at 6:27 AM, raj kumar wrote:
> Hi,
>
> I followed inst mentioned in ceph radosgw setup.
>
> I used swift python script to access it. I'm getting error like,
>
> Traceback (most recent call last):
> File "a.py", line 8, in
> authurl='http://192.168.211.70/auth',
> File
974400c6dd6ca71904/source.avi is the one that stalled.
>>>>
>>>>> How much are you loading the gateway before that happens? We've seen
>>>>> a similar issue in the past that was related to the fcgi library
>>>>> that is dynamicall
On Tue, Dec 24, 2013 at 8:16 AM, Kuo Hugo wrote:
> Hi folks,
>
> After some more tests I still cannot pinpoint the bottleneck. We never
> hit a CPU bound.
>
>
> OSD op threads : 60
> rgw_thread_pool_size : 300
> pg = 2000
> pool size = 3
>
> Try to find the max concurrency of 1KB write of this c
On Wed, Dec 25, 2013 at 9:12 AM, Kuo Hugo wrote:
> Hi folks,
>
> I'm in the process of tuning the performance of RadosGW on my server. After
> some kind help from you guys, I figured out several problems while optimizing
> the RadosGW to handle higher-concurrency requests from users.
>
> Apache optimiz