[ceph-users] Cumulative deletion impact

2014-04-10 Thread Andrey Korolyov
Hello, Just a question for users with large and dense clusters - does multiple simultaneous volume deletion with a high commit (tens of gigabytes) still affect latency on Emperor/Firefly? In other words, delayed deletion from multiple volumes can have a cumulative effect, killing the OSD perfor

Re: [ceph-users] Questions about federated gateways configure

2014-04-10 Thread wsnote
Now my configuration is normal, but there are still some mistakes. The bucket list can sync, but objects do not. In the secondary zone, with the secondary zone's key, I can't see the bucket list; but with the master zone's key, I can see the bucket list. The log is the following: the master zone: Thu, 10 Apr 2014 09:35:3

Re: [ceph-users] OSD space usage 2x object size after rados put

2014-04-10 Thread Mark Kirkwood
Some more checking: - re-deploying the cluster and testing again - same result (initial 2x space usage). - re-deploying with ext4 for the OSDs (instead of the default xfs)... *no* 2x space usage observed. Retested several times. So it looks like some combination of xfs/kernel/OS version (Ubuntu 13.10)

Re: [ceph-users] Ceph v0.79 Firefly RC :: erasure-code-profile command set not present

2014-04-10 Thread Karan Singh
Finally everything worked with ceph version 0.79-125. I agree with you, version 0.79 does have the erasure-code-profile command set, but this mess was due to the ceph init script, which was missing the "/lib/lsb/init-functions" file and blocking ceph services from starting. Thanks Sage / Alfredo fo

Re: [ceph-users] OSD space usage 2x object size after rados put

2014-04-10 Thread Russell E. Glaue
I am seeing the same thing, and was wondering the same. We have 16 OSDs on 4 hosts. The filesystem is XFS. The OS is CentOS 6.4. ceph version 0.72.2. I am importing a 3.3TB disk image into an RBD image. At 2.6TB, and still importing, 5.197TB is used according to `rados -p df`. With previous imag
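
A minimal sketch of the kind of commands being described, assuming a pool named rbd and a destination image name of the poster's choosing (neither is spelled out in the truncated message):

    # import a raw disk image into an RBD image (pool/image names are placeholders)
    rbd import disk.img rbd/bigvm
    # per-pool usage as reported by rados
    rados -p rbd df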

Re: [ceph-users] OSD space usage 2x object size after rados put

2014-04-10 Thread Udo Lembke
Hi, On 10.04.2014 20:03, Russell E. Glaue wrote: > I am seeing the same thing, and was wondering the same. > > We have 16 OSDs on 4 hosts. The File system is Xfs. The OS is CentOS 6.4. > ceph version 0.72.2 > > I am importing a 3.3TB disk image into a rbd image. > At 2.6TB, and still importing, 5

Re: [ceph-users] OSD space usage 2x object size after rados put

2014-04-10 Thread Russell E. Glaue
So `rados -p df` doesn't account for replication? If it says I have 10TB utilized, am I actually storing 5TB of data? Or in other words, if I have 6TB free, can I only store 3TB more? As I was saying, with regard to Mark's comments, after the image is imported, the difference in utilization r
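
One hedged aside for readers: the pool's replica count can be read directly (the pool name here is a placeholder, since it is not shown in the thread):

    # report how many copies the pool keeps of each object
    ceph osd pool get rbd size

With a replica count of N, the cluster writes roughly N copies of every object, so raw cluster usage is about N times the logical data stored.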

[ceph-users] create multiple OSDs without changing CRUSH until one last step

2014-04-10 Thread cwseys
Hi All, Is there a way to prepare the drives of multiple OSDs and then bring them into the CRUSH map all at once? Right now I'm using: ceph-deploy --overwrite-conf disk prepare --zap-disk $NODE:$DEV

Re: [ceph-users] create multiple OSDs without changing CRUSH until one last step

2014-04-10 Thread Gregory Farnum
Sounds like you want to explore the auto-in settings, which can prevent new OSDs from being automatically accepted into the cluster. Should turn up if you search ceph.com/docs. :) -Greg Software Engineer #42 @ http://inktank.com | http://ceph.com On Thu, Apr 10, 2014 at 1:45 PM, wrote: > Hi All
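
A rough sketch of what Greg is pointing at, using the option name that appears later in this thread; treat the exact spelling and placement as something to confirm against the docs:

    # ceph.conf on the monitors: do not automatically mark newly created OSDs "in"
    [global]
    mon osd auto mark new in = false

    # later, once all the new OSDs are prepared, bring them in together
    ceph osd in <osd-id> [<osd-id> ...]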

[ceph-users] ceph osd reweight cleared on reboot

2014-04-10 Thread Craig Lewis
I've got some OSDs that are nearfull. Hardware is ordered, and I've been using ceph osd reweight (not ceph osd crush reweight) to keep the cluster healthy until the new hardware arrives. Is it expected behavior that marking an osd down removes the ceph osd reweight? root@ceph1:~# ceph osd d
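
For anyone skimming, the two weights being contrasted, with OSD id and weights as placeholder values:

    # temporary override weight (0.0-1.0); as this thread observes,
    # it can be cleared when the OSD's up/in state changes
    ceph osd reweight 12 0.8
    # CRUSH weight, stored in the CRUSH map and persistent
    ceph osd crush reweight osd.12 1.0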

Re: [ceph-users] OSD space usage 2x object size after rados put

2014-04-10 Thread Mark Kirkwood
On 11/04/14 06:35, Udo Lembke wrote: Hi, On 10.04.2014 20:03, Russell E. Glaue wrote: I am seeing the same thing, and was wondering the same. We have 16 OSDs on 4 hosts. The File system is Xfs. The OS is CentOS 6.4. ceph version 0.72.2 I am importing a 3.3TB disk image into a rbd image. At 2

Re: [ceph-users] Questions about federated gateways configure

2014-04-10 Thread Craig Lewis
*Craig Lewis* Senior Systems Engineer Office +1.714.602.1309 Email cle...@centraldesktop.com *Central Desktop. Work together in ways you never thought possible.* Connect with us Website | Twitter

Re: [ceph-users] ceph osd reweight cleared on reboot

2014-04-10 Thread Gregory Farnum
Yes. It's awkward and the whole "two weights" thing needs a bit of UI reworking, but it's expected behavior. -Greg Software Engineer #42 @ http://inktank.com | http://ceph.com On Thu, Apr 10, 2014 at 3:59 PM, Craig Lewis wrote: > I've got some OSDs that are nearfull. Hardware is ordered, and I'

[ceph-users] CephFS MDS manual deployment

2014-04-10 Thread Adam Clark
Hey all, I am working through orchestrating the build of various ceph components using Puppet. So far the stackforge puppet-ceph has given me a heap to go on, but it is largely unfinished. I have sorted out the manual procedure for the MONs and OSDs via the documentation but there is scant info

Re: [ceph-users] CephFS MDS manual deployment

2014-04-10 Thread Gregory Farnum
I don't know if there's any formal documentation, but it's a lot simpler than the other components because it doesn't use any local storage (except for the keyring). You basically just need to generate a key and turn it on. Have you set one up by hand before? -Greg On Thursday, April 10, 2014, Ada

Re: [ceph-users] Dell R515/510 with H710 PERC RAID | JBOD

2014-04-10 Thread Punit Dambiwal
Hi, What is the drawback of running the journals on the RAID1? My plan is a 2-SSD RAID1 (then I will create virtual disks for the OS as well as for every OSD). That means one virtual disk for the OS and another 24 virtual disks for journals. Please suggest a better way to do this. On Wed, Apr 9, 20

Re: [ceph-users] create multiple OSDs without changing CRUSH until one last step

2014-04-10 Thread Chad William Seys
Hi Greg, Looks promising... I added to [global]: mon osd auto mark new in = false; then pushed the config to the monitor: ceph-deploy --overwrite-conf config push mon01; then restarted the monitor: /etc/init.d/ceph restart mon; then tried: ceph-deploy --overwrite-conf disk prepare --zap-disk osd02:sde
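
The same steps laid out one per line for readability (the hostnames mon01 and osd02 are from the poster's environment):

    # ceph.conf
    [global]
    mon osd auto mark new in = false

    # push the updated config to the monitor host and restart the monitor
    ceph-deploy --overwrite-conf config push mon01
    /etc/init.d/ceph restart mon

    # prepare the new OSD disk
    ceph-deploy --overwrite-conf disk prepare --zap-disk osd02:sde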

Re: [ceph-users] create multiple OSDs without changing CRUSH until one last step

2014-04-10 Thread Gregory Farnum
How many monitors do you have? It's also possible that re-used numbers won't get caught in this, depending on the process you went through to clean them up, but I don't remember the details of the code here. -Greg Software Engineer #42 @ http://inktank.com | http://ceph.com On Thu, Apr 10, 2014 a

Re: [ceph-users] Dell R515/510 with H710 PERC RAID | JBOD

2014-04-10 Thread Christian Balzer
Hello, On Fri, 11 Apr 2014 09:48:56 +0800 Punit Dambiwal wrote: > Hi, > > What is the drawback to run the journals on the RAID1...?? > Did you read what I wrote below? > My plan is 2 SSD RAID1 (then i will create virtual disks for OS as well > as for every OSD).That means one virtual disk of

Re: [ceph-users] CephFS MDS manual deployment

2014-04-10 Thread Adam Clark
Wow, that was quite simple mkdir /var/lib/ceph/mds/ceph-0 ceph auth get-or-create mds.0 mds 'allow' osd 'allow *' mon 'allow *' > /var/lib/ceph/mds/ceph-0/keyring ceph-mds --id 0 mount -t ceph ceph-mon01:6789:/ /mnt -o name=admin,secret= Can you confirm that I need the caps above for the MDS, o
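
The same commands, separated out for readability (the secret on the mount line is elided in the original and left that way here):

    mkdir /var/lib/ceph/mds/ceph-0
    ceph auth get-or-create mds.0 mds 'allow' osd 'allow *' mon 'allow *' > /var/lib/ceph/mds/ceph-0/keyring
    ceph-mds --id 0
    mount -t ceph ceph-mon01:6789:/ /mnt -o name=admin,secret=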

Re: [ceph-users] create multiple OSDs without changing CRUSH until one last step

2014-04-10 Thread Wido den Hollander
On 04/11/2014 04:01 AM, Chad William Seys wrote: Hi Greg, Looks promising... I added [global] ... mon osd auto mark new in = false Or this one: [osd] osd crush update on start = false That will prevent the OSDs from updating their weight or adding themselves to the crushmap on start
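
As a config snippet, the alternative Wido describes:

    # ceph.conf: keep OSDs from setting their own CRUSH location/weight at startup
    [osd]
    osd crush update on start = false

The new OSDs would then be placed into the CRUSH map explicitly when ready, for example with ceph osd crush add or ceph osd crush set.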

[ceph-users] OSD full - All RBD Volumes stopped responding

2014-04-10 Thread Greg Poirier
Hi, I have about 200 VMs with a common RBD volume as their root filesystem and a number of additional filesystems on Ceph. All of them have stopped responding. One of the OSDs in my cluster is marked full. I tried stopping that OSD to force things to rebalance or at least go to degraded mode, but

Re: [ceph-users] OSD full - All RBD Volumes stopped responding

2014-04-10 Thread Sage Weil
On Thu, 10 Apr 2014, Greg Poirier wrote: > Hi, > I have about 200 VMs with a common RBD volume as their root filesystem and a > number of additional filesystems on Ceph. > > All of them have stopped responding. One of the OSDs in my cluster is marked > full. I tried stopping that OSD to force thin

Re: [ceph-users] OSD full - All RBD Volumes stopped responding

2014-04-10 Thread Greg Poirier
Going to try increasing the full ratio. Disk utilization wasn't really growing at an unreasonable pace. I'm going to keep an eye on it for the next couple of hours and down/out the OSDs if necessary. We have four more machines that we're in the process of adding (which doubles the number of OSDs),
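
A hedged sketch of what "increasing the full ratio" typically looks like on releases of this vintage; the threshold values are illustrative, not taken from the thread:

    # temporarily raise the cluster-wide full/nearfull thresholds (defaults are 0.95/0.85)
    ceph pg set_full_ratio 0.97
    ceph pg set_nearfull_ratio 0.90

This only buys time until data is rebalanced or capacity is added.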

Re: [ceph-users] OSD full - All RBD Volumes stopped responding

2014-04-10 Thread Greg Poirier
One thing to note: all of our kvm VMs have to be rebooted. This is something I wasn't expecting. Tried waiting for them to recover on their own, but that's not happening. Rebooting them restores service immediately. :/ Not ideal. On Thu, Apr 10, 2014 at 10:12 PM, Greg Poirier wrote: > Going

Re: [ceph-users] OSD full - All RBD Volumes stopped responding

2014-04-10 Thread Wido den Hollander
> Op 11 april 2014 om 7:13 schreef Greg Poirier : > > > One thing to note > All of our kvm VMs have to be rebooted. This is something I wasn't > expecting. Tried waiting for them to recover on their own, but that's not > happening. Rebooting them restores service immediately. :/ Not ideal.

[ceph-users] slow request on OSD replacement

2014-04-10 Thread Erwin Lubbers
Hi, We're using a 24 server / 48 OSD (3 replicas) Ceph cluster (version 0.67.3) for RBD storage only and it is working great, but if a failed disk is replaced by a brand new one and the system starts to backfill it gives a lot of slow requests messages for 5 to 10 minutes. Then it does become s
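
The replies are truncated here, so as a hedged aside only: the knobs usually reached for when backfill competes with client I/O look like this, with illustrative values:

    # throttle backfill/recovery so client requests keep priority
    ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1 --osd-recovery-op-priority 1'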

Re: [ceph-users] OSD full - All RBD Volumes stopped responding

2014-04-10 Thread Josef Johansson
Hi, On 11/04/14 07:29, Wido den Hollander wrote: > >> Op 11 april 2014 om 7:13 schreef Greg Poirier : >> >> >> One thing to note >> All of our kvm VMs have to be rebooted. This is something I wasn't >> expecting. Tried waiting for them to recover on their own, but that's not >> happening. Reb