[ceph-users] Don't allow user to create buckets but can read in radosgw

2014-05-11 Thread Thanh Tran
Hi, Is there any way to set permissions so that users cannot create buckets but can still read them? And how can a prefix be added to a bucket name when a user creates one, e.g. when user 'aa' creates a bucket called 'testing', the bucket name becomes 'aatesting'? Best regards, Thanh Tran
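One possible approach (a sketch only; --max-buckets semantics have varied between radosgw releases, so check the docs for your version): set the user's bucket limit to zero so creation is refused while existing read grants keep working:

# radosgw-admin user modify --uid=aa --max-buckets=0

Read access to individual buckets can then be granted from the owning account via S3 ACLs. As far as I know there is no built-in option to auto-prefix bucket names; that would need a proxy in front of the gateway.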

[ceph-users] [OFF TOPIC] Deep Intellect - Inside the mind of the octopus

2014-05-11 Thread Amit Vijairania
Everyone involved with Ceph must be curious about cephalopods. Very interesting article: http://www.orionmagazine.org/index.php/articles/article/6474/ - Amit Vijairania

Re: [ceph-users] Replace journals disk

2014-05-11 Thread Indra Pramana
Hi Gandalf and all, FYI, I checked sgdisk's man page and it seems that the correct command to restore should be:

sgdisk --load-backup=/tmp/journal_table /dev/sdg

Will try this next weekend and update again. Thank you. On Sat, May 10, 2014 at 10:58 PM, Indra Pramana wrote: > Hi Gandalf, >
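For anyone following along, the matching backup step uses the same syntax (assuming /dev/sdg is the journal disk, as in this thread):

# sgdisk --backup=/tmp/journal_table /dev/sdg
# sgdisk --load-backup=/tmp/journal_table /dev/sdg

Stop the affected OSDs and flush their journals first (ceph-osd -i <id> --flush-journal) before swapping the disk.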

[ceph-users] fixing degraded PGs

2014-05-11 Thread Kei.masumoto
Hi, I built a new cluster following the tutorial http://ceph.com/docs/master/start/. Then I got a bunch of degraded PGs:

ceph-osd1:~# ceph -s
    cluster 00f2b37f-ccfd-4569-b27d-8ddcce62573d
     health HEALTH_WARN 192 pgs degraded; 192 pgs stuck unclean
     monmap e1: 1 mons at {ceph-mon1=192.1
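A frequent cause on quickstart clusters (an assumption here, since the OSD listing is cut off): the default replica count exceeds the number of hosts, so CRUSH cannot place all copies. Lowering the pool size, or letting CRUSH pick leaves at the OSD level, usually clears it:

# ceph osd pool set rbd size 2
(repeat for the data and metadata pools on Firefly-era defaults)

or, for a single-host cluster, in ceph.conf before deployment:

[global]
osd crush chooseleaf type = 0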

Re: [ceph-users] NFS over CEPH - best practice

2014-05-11 Thread Leen Besselink
On Sun, May 11, 2014 at 09:24:30PM +0100, Andrei Mikhailovsky wrote:
> Sorry if these questions will sound stupid, but I was not able to find an
> answer by googling.

As the Australians say: no worries, mate. It's fine.

> 1. Does iSCSI protocol support having multiple target servers to serve
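On the initiator side, the usual companion to dual targets is dm-multipath; a minimal /etc/multipath.conf sketch (the IET/VIRTUAL-DISK identifiers match tgt's defaults, but treat the whole stanza as illustrative):

defaults {
    user_friendly_names yes
}
devices {
    device {
        vendor "IET"
        product "VIRTUAL-DISK"
        path_grouping_policy failover
    }
}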

Re: [ceph-users] NFS over CEPH - best practice

2014-05-11 Thread Andrei Mikhailovsky
Sorry if these questions sound stupid, but I was not able to find an answer by googling. 1. Does the iSCSI protocol support having multiple target servers serve the same disk/block device? In the case of Ceph, the same rbd disk image. I was hoping to have multiple servers mount the same r
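For reference, the setup most often discussed at the time (a sketch with assumed names, not something confirmed in this thread) was one tgt instance per gateway exporting the same image through the rbd backing store, with multipath on the initiators:

# tgtadm --lld iscsi --mode target --op new --tid 1 --targetname iqn.2014-05.com.example:rbd-gw1
# tgtadm --lld iscsi --mode logicalunit --op new --tid 1 --lun 1 --bstype rbd --backing-store pool/image

Any write caching must stay disabled on every gateway, or the paths can serve inconsistent data.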

Re: [ceph-users] ceph-noarch firefly repodata

2014-05-11 Thread Alfredo Deza
No reason; this is obviously something that needs fixing. I've created an issue to track this problem and fix it: tracker.ceph.com/issues/8330 On Sun, May 11, 2014 at 10:56 AM, Simon Ironside wrote: > Hi there, > > Is there any reason not to use the latest packages from: > ceph.com/rpm-firefly/rh
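Until the repodata is regenerated, one workaround is to point yum at the RPM directly (the exact filename below is an assumption; check the directory listing first):

# yum install http://ceph.com/rpm-firefly/rhel6/noarch/ceph-deploy-1.5.2-0.noarch.rpm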

[ceph-users] ceph-noarch firefly repodata

2014-05-11 Thread Simon Ironside
Hi there, Is there any reason not to use the latest packages from ceph.com/rpm-firefly/rhel6/noarch/? I.e., when installing via yum, ceph-deploy-1.4.0 is installed, but 1.5.0, 1.5.1 and 1.5.2 are present in the directory above. Yum also complains about radosgw-agent-1.2.0-0.noarch.rpm not bei
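For anyone hitting the same thing, a typical repo definition of the era looked like this (the gpgkey URL matches the Firefly-era docs; treat the stanza as illustrative):

[ceph-noarch]
name=Ceph noarch packages
baseurl=http://ceph.com/rpm-firefly/rhel6/noarch
enabled=1
gpgcheck=1
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc

yum only offers what the repo's metadata advertises, so a stale repomd.xml explains seeing 1.4.0 even with newer RPMs sitting in the directory.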

Re: [ceph-users] Info firefly qemu rbd

2014-05-11 Thread Federico Iezzi
Sorry guys for the delay. BTW: yes, it seems to be a libvirt bug, but the same environment works with Emperor. I didn't fix the problem and rolled back to Emperor. All my systems are managed by Puppet, and I consider this a pre-production system. Regards, Federico On 08 May 2014, a

Re: [ceph-users] v0.80 Firefly released

2014-05-11 Thread Sergey Malinin
> > # ceph tell mon.* injectargs '--mon_osd_allow_primary_affinity true'
>
> Ignore the "mon.a: injectargs: failed to parse arguments: true"
> warnings, this appears to be a bug [0].

It will work this way:

# ceph tell mon.* injectargs -- --mon_osd_allow_primary_affinity=true
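To make the setting survive monitor restarts, the injectargs form has a ceph.conf equivalent (restart the mons after editing):

[mon]
mon osd allow primary affinity = true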