[ceph-users] Problem with "radosgw-admin temp remove"

2013-04-27 Thread Igor Laskovy
Hello, I have a problem with clearing space for the RGW pool with the "radosgw-admin temp remove" command:
root@osd01:~# ceph -v
ceph version 0.56.4 (63b0f854d1cef490624de5d6cf9039735c7de5ca)
root@osd01:~# radosgw-admin temp remove --date=2014-04-26
failed to list objects
failure removing temp objects: (2) N

Re: [ceph-users] Problem with "radosgw-admin temp remove"

2013-04-27 Thread Yehuda Sadeh
The temp remove is an obsolete feature that was needed before the introduction of the garbage collector. It's not needed in that version.
Yehuda
On Sat, Apr 27, 2013 at 6:33 AM, Igor Laskovy wrote:
> Hello,
>
> have problem with clearing space for RGW pool with "radosgw-admin temp
> remove" comm

Re: [ceph-users] Problem with "radosgw-admin temp remove"

2013-04-27 Thread Igor Laskovy
Well, then what about the used space of the RGW pool? How can I trim it after files are deleted?
On Sat, Apr 27, 2013 at 6:34 PM, Yehuda Sadeh wrote:
> The temp remove is an obsolete feature that was needed before the
> introduction of the garbage collector. It's not needed in that
> version.
>
> Yehuda

Re: [ceph-users] Problem with "radosgw-admin temp remove"

2013-04-27 Thread Igor Laskovy
I will rephrase my question. When I upload files over S3, ceph -s reports growth in used space, but when these files are deleted, no space is freed. Yehuda, could you please explain a little bit more about how I can control this behavior?
On Sat, Apr 27, 2013 at 7:09 PM, Igor Laskovy wrote:
>

Re: [ceph-users] journal on ramdisk for testing

2013-04-27 Thread Matthieu Patou
On 04/25/2013 12:39 AM, James Harper wrote:
I'm doing some testing and wanted to see the effect of increasing journal speed, and the fastest way to do this seemed to be to put it on a ramdisk, where latency should drop to near zero and I can see what other inefficiencies exist. I created a tmpf
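
For reference, a minimal sketch of what such a test setup might look like (the mount point, size, and OSD id are assumptions, not taken from the thread):

    # create a ramdisk and point one OSD's journal at it
    mount -t tmpfs -o size=2G tmpfs /mnt/ram-journal

    # ceph.conf fragment for that OSD
    [osd.0]
        osd journal = /mnt/ram-journal/osd.0.journal
        osd journal size = 1024   ; MB

Note that a tmpfs journal is only suitable for benchmarking: its contents are lost on reboot or power failure, and any writes still sitting in the journal are lost with it.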

Re: [ceph-users] Problem with "radosgw-admin temp remove"

2013-04-27 Thread Yehuda Sadeh
Basically you need to wait for the relevant objects to expire, and then for the garbage collector to run its course. Expiration is ~2 hr from deletion, and the garbage collector starts every hour, but you can run it manually via 'radosgw-admin gc process'. There are a couple of relevant configurables that can
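
A short sketch of the commands and settings involved (the section name and values are assumptions based on the defaults of that era, not quoted from the thread):

    radosgw-admin gc list       # show objects queued for garbage collection
    radosgw-admin gc process    # run the collector immediately

    # ceph.conf fragment for the gateway
    [client.radosgw.gateway]
        rgw gc obj min wait = 7200        ; seconds a deleted object waits before it can be reclaimed (~2 hr)
        rgw gc processor period = 3600    ; how often the collector runs (1 hr)
        rgw gc processor max time = 3600  ; maximum runtime of one collector cycle

Lowering 'rgw gc obj min wait' and 'rgw gc processor period' makes deleted S3 objects free space in 'ceph -s' sooner, at the cost of more frequent gc work.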

[ceph-users] RAID 6 with Ceph 0.56.4

2013-04-27 Thread ke_bac tinh
Hi all, I have one RAID card on the server. I use RAID 6, and I divided it into 4 partitions, each partition corresponding to 1 OSD. I have 2 servers ==> 8 OSDs, but when I run the Ceph services, the OSDs frequently go down. How can I make this setup reasonable? Thanks, Mr. Join's Pas

[ceph-users] Failed assert when starting new OSDs in 0.60

2013-04-27 Thread Travis Rhoden
Hey folks, I'm helping put together a new test/experimental cluster, and hit this today when bringing the cluster up for the first time (using mkcephfs). After doing the normal "service ceph -a start", I noticed one OSD was down, and a lot of PGs were stuck creating. I tried restarting the down
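
A rough checklist for this kind of situation, assuming the sysvinit scripts of that release (the OSD id and log path below are placeholders, not from the thread):

    ceph -s                              # overall health and stuck PGs
    ceph osd tree                        # identify which OSD is marked down
    less /var/log/ceph/ceph-osd.2.log    # look for the failed assert backtrace
    service ceph start osd.2             # on that OSD's host, try restarting it

If the assert reproduces on every start, the backtrace from the log is usually what is needed to track it down.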

Re: [ceph-users] RAID 6 with Ceph 0.56.4

2013-04-27 Thread John Wilkins
Mark Kampe gave an excellent presentation on why Ceph may preclude the need for RAID 6 and may provide you with better recovery advantages. Have a look at it here: http://www.youtube.com/watch?v=La0Bxus6Fkg
On Sat, Apr 27, 2013 at 5:15 PM, ke_bac tinh wrote:
> Hi all,
>
> I have 1 card raid on
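
In other words, the usual alternative is one OSD per physical disk rather than several OSDs carved out of one RAID 6 volume. A mkcephfs-era ceph.conf sketch (host and device names are assumptions):

    [osd.0]
        host = osd01
        devs = /dev/sdb
    [osd.1]
        host = osd01
        devs = /dev/sdc
    ; ...one [osd.N] section per disk, on each server

This lets Ceph replicate across hosts and rebuild only the failed disk's data, instead of the RAID controller rebuilding the whole array.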