Re: [ceph-users] features of the next stable release

2015-02-03 Thread mad Engineer
I am also planning to create an SSD-only cluster using multiple OSDs on a few hosts. What's the best way to get maximum performance out of SSD disks? I don't have the cluster running yet, but seeing this thread makes me worry that RBD will not be able to extract the full capability of SSD disks. I am beginner i

Re: [ceph-users] ssd OSD and disk controller limitation

2015-02-03 Thread mad Engineer
Thanks Florent On Mon, Feb 2, 2015 at 11:26 PM, Florent MONTHEL wrote: > Hi, > > Writes will be distributed every 4MB (size of IMAGEV1 RBD object) > IMAGEV2 not fully supported on KRBD (but you can customize size of object > and striping) > > You need to take : > - SSD SATA 6gbits > - or SSD SAS

Re: [ceph-users] Repetitive builds for Ceph

2015-02-03 Thread Ritesh Raj Sarraf
On 02/03/2015 03:50 AM, Mark Kirkwood wrote: > Same here - in the event you need to rebuild the whole thing, using > parallel make speeds it up heaps (and seems to build it correctly), i.e: > > $ make -j4 > > Cheers > > Mark That it already is doing. If you look at debian/rules, you'll see the B

[ceph-users] Reduce pg_num

2015-02-03 Thread Mateusz Skała
Hi, is it possible to reduce pg_num on an unused pool, for example data or metadata? We are using only the rbd pool, but pg_num for data and metadata is set to 1024. Regards Mateusz

Re: [ceph-users] Reduce pg_num

2015-02-03 Thread John Spray
Mateusz, Presumably you've tried deleting the data and metadata pools and found that it refused because they were in use in a filesystem? In that case you can deactivate the filesystem with "fs rm " (identify name from "fs ls"). There was a version (I forget which) where the pools couldn't be de

Re: [ceph-users] cephfs: from a file name determine the objects name

2015-02-03 Thread Mudit Verma
Hi Sage, Following are the greps through inotify. I created a file named cris at the client end. From stats the ino number is 1099511628795 but it does not match the hex *06E5F474* On OSD: /var/local/osd1/current/2.0_head/ OPEN *200.0001__head_06E5F474__2* /var/local/osd1/current/2.0_head

Re: [ceph-users] cephfs: from a file name determine the objects name

2015-02-03 Thread Sage Weil
On Tue, 3 Feb 2015, Mudit Verma wrote: > Hi Sage,  > Following are the greps through inotify,  I created a file name cris at > client end  > > From stats the ino number is 1099511628795 10003fb The object names will be something like 10003fb.0001. The filenames once they hit disk w

Re: [ceph-users] cephfs: from a file name determine the objects name

2015-02-03 Thread Mudit Verma
Great, I found the file which contains the data root@ceph-osd1:/var/local/osd1/current# grep "hello3" * -r 1.0_head/10003fb.__head_4BC00833__1:hello3 And it does indeed match the hex ino. How is it determined that the data will go to 1.0_head and what does head_4BC00833 stand f

Re: [ceph-users] Reduce pg_num

2015-02-03 Thread Mateusz Skała
Thanks, I hadn't figured out that I could delete these pools. Btw. I'm on the 0.87 release. Mateusz -Original Message- From: john.sp...@inktank.com [mailto:john.sp...@inktank.com] On Behalf Of John Spray Sent: Tuesday, February 3, 2015 10:04 AM To: Mateusz Skała Cc: ceph-users@lists.ceph.com Subject: R

Re: [ceph-users] cephfs: from a file name determine the objects name

2015-02-03 Thread Sage Weil
The hash of the filename ($inode.$block) is 4BC00833, and the pg id is a slightly weird function of those bits. sage On Tue, 3 Feb 2015, Mudit Verma wrote: > Great, I found the file which contains the data  > root@ceph-osd1:/var/local/osd1/current# grep "hello3" * -r > > 1.0_head/10003
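The naming rule described in this thread can be sketched from the shell. This is an illustration only: the 8-hex-digit zero padding of the block number is an assumption based on standard CephFS object naming, and note that CephFS inode numbers start above 2^40, so the full hex string is longer than terse quoting may suggest.

```shell
# Derive a CephFS data object name: <inode in hex> "." <block as 8 hex digits>
ino=1099511628795      # decimal inode, e.g. from "stat" on the client
block=0                # first (default 4MB) object of the file
obj=$(printf '%x.%08x' "$ino" "$block")
echo "$obj"
```

The object name printed here is what you would look for in the OSD's current/ directory (with the `__head_<hash>__<pool>` suffix appended on disk).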

Re: [ceph-users] cephfs: from a file name determine the objects name

2015-02-03 Thread Mudit Verma
Thanks a lot Sage. On Tue, Feb 3, 2015 at 3:34 PM, Sage Weil wrote: > The hash of the filename is ($inode.$block) is 4BC00833, and the pg id is > a slightly weird function of those bits. > > sage > > > On Tue, 3 Feb 2015, Mudit Verma wrote: > > > Great, I found the file which contains the data >

Re: [ceph-users] cephfs: from a file name determine the objects name

2015-02-03 Thread John Spray
Mudit, Those are the journal objects you're seeing touched. Write some data to the file, and do a "rados -p ls" to check the objects for the inode number you're expecting. Cheers, John On Tue, Feb 3, 2015 at 10:53 AM, Mudit Verma wrote: > Hi Sage, > > Following are the greps through inotify,

[ceph-users] Monitor Restart triggers half of our OSDs marked down

2015-02-03 Thread Christian Eichelmann
Hi all, during some failover and configuration tests, we are currently observing a strange phenomenon: restarting one of our monitors (5 in total) triggers about 300 of the following events: osd.669 10.76.28.58:6935/149172 failed (20 reports from 20 peers after 22.005858 >= grace 20.00)
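If the flapping turns out to be heartbeat-related rather than a monitor-side bug, knobs of this era exist to make the cluster more conservative about marking OSDs down. A hedged ceph.conf sketch; the values are illustrative assumptions, not recommendations, and raising them also delays detection of genuinely failed OSDs:

```ini
[mon]
# require failure reports from more distinct peers before marking an OSD down
mon osd min down reporters = 30

[osd]
# allow more time without heartbeats before an OSD is reported failed
# (the "grace 20.00" in the log above is the default of this option)
osd heartbeat grace = 30
```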

Re: [ceph-users] Monitor Restart triggers half of our OSDs marked down

2015-02-03 Thread Andrey Korolyov
On Tue, Feb 3, 2015 at 2:38 PM, Christian Eichelmann wrote: > Hi all, > > during some failover tests and some configuration tests, we currently > discover a strange phenomenon: > > Restarting one of our monitors (5 in sum) triggers about 300 of the > following events: > > osd.669 10.76.28.58:6935/

Re: [ceph-users] features of the next stable release

2015-02-03 Thread Alexandre DERUMIER
Hi, from my tests with Giant, it was the CPU which limited performance on the OSDs. I'm going to do some benchmarks with 2x 10-core 3.1GHz CPUs for 6 SSDs next month. I'll post the results on the mailing list. - Mail original - De: "mad Engineer" À: "Gregory Farnum" Cc: "ceph-users" Envoyé: Ma

[ceph-users] cephfs-fuse: set/getfattr, change pools

2015-02-03 Thread Daniel Schneller
Hi! We have a CephFS directory /baremetal mounted as /cephfs via FUSE on our clients. There are no specific settings configured for /baremetal. As a result, trying to get the directory layout via getfattr does not work getfattr -n 'ceph.dir.layout' /cephfs /cephfs: ceph.dir.layout: No such att

Re: [ceph-users] ceph reports 10x actuall available space

2015-02-03 Thread pixelfairy
Turns out I misunderstood the playbook. In "scenario 4" the variable osd_directories refers to premounted partitions, not the directory of the ceph journals. Also, unrelated, but it might help someone on Google: when you use virsh attach-disk, remember to add --persistent, like this (--persistent migh

[ceph-users] method to verify replica's actually exist on disk ?

2015-02-03 Thread Stephen Hindle
Hi All, We're going to start some basic testing with Ceph - throughput, node failure, and custom crush maps. I'm trying to come up with a testing plan - but I'm sorta stuck on the most basic check: Is there an 'easy' way to take a filename and determine which OSDs the replicas should be wri
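For the basic check, `ceph osd map <pool> <object>` reports the placement group and the acting OSD set for a given object name. A sketch under stated assumptions: the pool name "data", the object name, and the /cephfs mount point are all examples, and DRY_RUN=echo is used here so the command is only printed; unset it on a real cluster node.

```shell
# Sketch: which OSDs should hold the first object of a CephFS file?
DRY_RUN=echo                        # set DRY_RUN= to really query a cluster
obj="100000003fb.00000000"          # example: <inode-hex>.<block>, e.g. from
                                    #   printf '%x.%08x' "$(stat -c %i /cephfs/myfile)" 0
$DRY_RUN ceph osd map data "$obj"
# On a live cluster this prints the osdmap epoch, pg id, and up/acting sets.
```

Comparing that acting set against what is actually present under each OSD's current/ directory (or a `rados -p data ls` listing) is one way to verify the replicas exist on disk.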

[ceph-users] Ceph Supermicro hardware recommendation

2015-02-03 Thread Colombo Marco
Hi all, I have to build a new Ceph storage cluster. After I've read the hardware recommendations and some mail from this mailing list, I would like to buy these servers: OSD: SSG-6027R-E1R12L -> http://www.supermicro.nl/products/system/2U/6027/SSG-6027R-E1R12L.cfm Intel Xeon e5-2630 v2 64 GB RA

Re: [ceph-users] Ceph Supermicro hardware recommendation

2015-02-03 Thread Nick Fisk
Hi, Just a couple of points, you might want to see if you can get a Xeon v3 board+CPU as they have more performance and use less power. You can also get a SM 2U chassis which has 2x 2.5” disk slots at the rear, this would allow you to have an extra 2x 3.5” disks in the front of the serve

[ceph-users] v0.92 released

2015-02-03 Thread Sage Weil
This is the second-to-last chunk of new stuff before Hammer. Big items include additional checksums on OSD objects, proxied reads in the cache tier, image locking in RBD, optimized OSD Transaction and replication messages, and a big pile of RGW and MDS bug fixes. Upgrading - * The experi

Re: [ceph-users] Ceph Supermicro hardware recommendation

2015-02-03 Thread Colombo Marco
Hi Nick, Hi, Just a couple of points, you might want to see if you can get a Xeon v3 board+CPU as they have more performance and use less power. ok You can also get a SM 2U chassis which has 2x 2.5” disk slots at the rear, this would allow you to have an extra 2x 3.5” disks in the front o

Re: [ceph-users] cephfs not mounting on boot

2015-02-03 Thread Daniel Schneller
In the absence of other clues, you might want to try checking that the network is coming up before ceph tries to mount. Now I think on it, that might just be it - I seem to recall a similar problem with cifs mounts, despite having the _netdev option. I had to issue a mount in /etc/network/if-up

[ceph-users] .Health Warning : .rgw.buckets has too few pgs

2015-02-03 Thread Shashank Puntamkar
I am using Ceph Firefly (ceph version 0.80.7) with a single radosgw instance, no RBD. I am facing the problem ".rgw.buckets has too few pgs". I have tried to increase the number of pgs using the command "ceph osd pool set pg_num " but in vain. I also tried "ceph osd crush tunables optimal" but no effe

Re: [ceph-users] .Health Warning : .rgw.buckets has too few pgs

2015-02-03 Thread Stephen Hindle
Pardon me if this is a little basic... but did you remember to set the pgp_num on the pool after setting pg_num ? I believe the warning won't go away till that's done. On Tue, Feb 3, 2015 at 9:27 AM, Shashank Puntamkar wrote: > I am using ceph firefly (ceph version 0.80.7 ) with single Radosgw >
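Stephen's point can be sketched together with the PG sizing rule of thumb from the Ceph docs of this era: total PGs of roughly (#OSDs * 100) / replica count, rounded up to a power of two, and pgp_num must be raised to match pg_num before the warning clears. The OSD and replica counts below are example numbers, not taken from this thread.

```shell
# Rule-of-thumb PG count: (#OSDs * 100) / replicas, rounded up to a power of two
osds=96 replicas=3                    # example values, not from this thread
target=$(( osds * 100 / replicas ))
pg=1
while [ "$pg" -lt "$target" ]; do pg=$(( pg * 2 )); done
echo "$pg"
# Apply BOTH values; the "too few pgs" warning persists until pgp_num catches up:
#   ceph osd pool set .rgw.buckets pg_num  "$pg"
#   ceph osd pool set .rgw.buckets pgp_num "$pg"
```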

Re: [ceph-users] Update 0.80.7 to 0.80.8 -- Restart Order

2015-02-03 Thread Stephen Jahl
I just went through and updated my Ubuntu 14.04 0.80.7 cluster to 0.80.8 this morning. The package updates looked like they triggered _starts_ for some services (ceph-all, ceph-mds-all), but not restarts. ceph-mon and ceph-osd services were _not_ restarted, allowing me to restart them in the order I de

Re: [ceph-users] Monitor Restart triggers half of our OSDs marked down

2015-02-03 Thread Gregory Farnum
On Tue, Feb 3, 2015 at 3:38 AM, Christian Eichelmann wrote: > Hi all, > > during some failover tests and some configuration tests, we currently > discover a strange phenomenon: > > Restarting one of our monitors (5 in sum) triggers about 300 of the > following events: > > osd.669 10.76.28.58:6935/

Re: [ceph-users] cephfs-fuse: set/getfattr, change pools

2015-02-03 Thread Gregory Farnum
On Tue, Feb 3, 2015 at 5:21 AM, Daniel Schneller wrote: > Hi! > > We have a CephFS directory /baremetal mounted as /cephfs via FUSE on our > clients. > There are no specific settings configured for /baremetal. > As a result, trying to get the directory layout via getfattr does not work > > getfatt

Re: [ceph-users] cephfs-fuse: set/getfattr, change pools

2015-02-03 Thread Daniel Schneller
We have a CephFS directory /baremetal mounted as /cephfs via FUSE on our clients. There are no specific settings configured for /baremetal. As a result, trying to get the directory layout via getfattr does not work getfattr -n 'ceph.dir.layout' /cephfs /cephfs: ceph.dir.layout: No such attribute

Re: [ceph-users] cephfs-fuse: set/getfattr, change pools

2015-02-03 Thread Gregory Farnum
On Tue, Feb 3, 2015 at 9:23 AM, Daniel Schneller wrote: >>> We have a CephFS directory /baremetal mounted as /cephfs via FUSE on our >>> clients. >>> There are no specific settings configured for /baremetal. >>> As a result, trying to get the directory layout via getfattr does not >>> work >>> >>>

[ceph-users] client unable to access files after caching pool addition

2015-02-03 Thread J-P Methot
Hi, I tried to add a caching pool in front of openstack vms and volumes pools. I believed that the process was transparent, but as soon as I set the caching for both of these pools, the VMs could not find their volumes anymore. Obviously when I undid my changes, everything went back to normal

Re: [ceph-users] client unable to access files after caching pool addition

2015-02-03 Thread Gregory Farnum
On Tue, Feb 3, 2015 at 10:23 AM, J-P Methot wrote: > Hi, > > I tried to add a caching pool in front of openstack vms and volumes pools. I > believed that the process was transparent, but as soon as I set the caching > for both of these pools, the VMs could not find their volumes anymore. > Obvious

Re: [ceph-users] cephfs-fuse: set/getfattr, change pools

2015-02-03 Thread Daniel Schneller
On 2015-02-03 18:19:24 +, Gregory Farnum said: Okay, I've looked at the code a bit, and I think that it's not showing you one because there isn't an explicit layout set. You should still be able to set one if you like, though; have you tried that? Actually, no, not yet. We were setting up
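Explicitly setting a layout can be sketched with the ceph.dir.layout virtual xattrs. The pool name "rbd-ssd" and the path are assumptions, and DRY_RUN=echo only prints the commands; drop it on a client with CephFS mounted. A new layout only affects files created after the change - existing file data stays in its original pool.

```shell
# Sketch: pin a CephFS directory to a different data pool, then read it back
DRY_RUN=echo     # set DRY_RUN= on a client with the filesystem mounted
$DRY_RUN setfattr -n ceph.dir.layout.pool -v rbd-ssd /cephfs/baremetal
$DRY_RUN getfattr -n ceph.dir.layout /cephfs/baremetal
```

Note that the target pool must already have been added to the filesystem (`ceph mds add_data_pool` in this era) or the setfattr is refused.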

Re: [ceph-users] Update 0.80.7 to 0.80.8 -- Restart Order

2015-02-03 Thread Alexandre DERUMIER
Debian deb package updates do not restart services. (So I think it should be the same for Ubuntu.) You need to restart daemons in this order: - monitor - osd - mds - rados gateway http://ceph.com/docs/master/install/upgrading-ceph/ - Mail original - De: "Stephen Jahl" À: "Gregory Fa

Re: [ceph-users] cephfs-fuse: set/getfattr, change pools

2015-02-03 Thread John Spray
On Tue, Feb 3, 2015 at 2:21 PM, Daniel Schneller wrote: > Now, say I wanted to put /baremetal into a different pool, how would I go > about this? > > Can I setfattr on the /cephfs mountpoint and assign it a different pool with > e. g. different replication settings? This should make it clearer: h

Re: [ceph-users] cephfs-fuse: set/getfattr, change pools

2015-02-03 Thread Gregory Farnum
On Tue, Feb 3, 2015 at 1:17 PM, John Spray wrote: > On Tue, Feb 3, 2015 at 2:21 PM, Daniel Schneller > wrote: >> Now, say I wanted to put /baremetal into a different pool, how would I go >> about this? >> >> Can I setfattr on the /cephfs mountpoint and assign it a different pool with >> e. g. dif

Re: [ceph-users] cephfs-fuse: set/getfattr, change pools

2015-02-03 Thread Daniel Schneller
Understood. Thanks for the details. Daniel On Tue, Feb 3, 2015 at 1:23 PM -0800, "Gregory Farnum" wrote: On Tue, Feb 3, 2015 at 1:17 PM, John Spray wrote: > On Tue, Feb 3, 2015 at 2:21 PM, Daniel Schneller > wrote: >> Now, say I wanted to put /baremetal into a different pool, ho

Re: [ceph-users] cephfs-fuse: set/getfattr, change pools

2015-02-03 Thread John Spray
On Tue, Feb 3, 2015 at 10:23 PM, Gregory Farnum wrote: >> If you explicitly change the layout of a file containing data to point >> to a different pool, then you will see zeros when you try to read it >> back (although new data will be written to the new pool). > > That statement sounds really sca

Re: [ceph-users] v0.92 released

2015-02-03 Thread Samuel Just
There seems to be a bug with the transaction encoding when upgrading from v0.91 to v0.92. Users probably want to hold off on upgrading to v0.92 until http://tracker.ceph.com/issues/10734 is resolved. -Sam On Tue, Feb 3, 2015 at 7:40 AM, Sage Weil wrote: > This is the second-to-last chunk of new

Re: [ceph-users] cephfs-fuse: set/getfattr, change pools

2015-02-03 Thread Gregory Farnum
On Tue, Feb 3, 2015 at 1:30 PM, John Spray wrote: > On Tue, Feb 3, 2015 at 10:23 PM, Gregory Farnum wrote: >>> If you explicitly change the layout of a file containing data to point >>> to a different pool, then you will see zeros when you try to read it >>> back (although new data will be writte

Re: [ceph-users] cephfs-fuse: set/getfattr, change pools

2015-02-03 Thread John Spray
http://tracker.ceph.com/issues/10737 John On Tue, Feb 3, 2015 at 10:36 PM, Gregory Farnum wrote: > On Tue, Feb 3, 2015 at 1:30 PM, John Spray wrote: >> On Tue, Feb 3, 2015 at 10:23 PM, Gregory Farnum wrote: If you explicitly change the layout of a file containing data to point to a d

Re: [ceph-users] Ceph Supermicro hardware recommendation

2015-02-03 Thread Christian Balzer
On Tue, 3 Feb 2015 15:16:57 + Colombo Marco wrote: > Hi all, > I have to build a new Ceph storage cluster, after i‘ve read the > hardware recommendations and some mail from this mailing list i would > like to buy these servers: > Nick mentioned a number of things already I totally agree wit