[ceph-users] ceph + openstack: running osd and nova compute on the same node

2013-03-17 Thread Matthieu Patou
Hello, Is it a good idea to run the osd and nova compute on the same node, and if not, why not? Matthieu.

Re: [ceph-users] using ssds with ceph

2013-03-17 Thread Matthieu Patou
On 03/17/2013 04:03 PM, Mark Nelson wrote: On 03/17/2013 05:40 PM, Matthieu Patou wrote: Hello all, Our dev environments are quite I/O intensive but don't require much space (~20G per dev environment); for the moment our dev machines are served by VMWare and the storage is done in NFS appliance

Re: [ceph-users] Planning for many small files

2013-03-17 Thread Gregory Farnum
On Sunday, March 17, 2013 at 5:49 PM, Yehuda Sadeh wrote: > Not at the moment. We had some discussions about "blind" buckets, it's > definitely on our mind, but we're not there yet. > > Yehuda > > On Sun, Mar 17, 2013 at 3:09 PM, Rustam Aliyev wrote: > > Thanks for

Re: [ceph-users] using ssds with ceph

2013-03-17 Thread Gregory Farnum
On Sunday, March 17, 2013 at 4:03 PM, Mark Nelson wrote: > On 03/17/2013 05:40 PM, Matthieu Patou wrote: > > Hello all, > > > > Our dev environments are quite I/O intensive but don't require much > > space (~20G per dev environment); for the moment our dev machines are > > served by VMWare and th

Re: [ceph-users] Planning for many small files

2013-03-17 Thread Yehuda Sadeh
Not at the moment. We had some discussions about "blind" buckets; it's definitely on our mind, but we're not there yet. Yehuda On Sun, Mar 17, 2013 at 3:09 PM, Rustam Aliyev wrote: > Thanks for the detailed explanation. > > Is there any way to disable bucket indexes? We already store the index in our >

Re: [ceph-users] SL4500 as a storage machine

2013-03-17 Thread Stas Oskin
> For me, we have seen a supermicro machine, which is 2U with 2 CPUs and 24 2.5 > inch sata/sas drives, together with 2 onboard 10Gb NICs. I think it's good > enough for both density and computing power. > > Can this configuration also hold 12 3.5 inch drives? What model do you use?

Re: [ceph-users] SL4500 as a storage machine

2013-03-17 Thread Stas Oskin
Hi Mark. > The SL4500 series looks like it should be a good option for large > deployments, though you may want to consider going with the 2-node > configuration with 25 drives each. The drive density is a bit lower but > you'll have a better CPU/drive ratio and can get away with much cheaper >

Re: [ceph-users] SL4500 as a storage machine

2013-03-17 Thread Chen, Xiaoxi
For me, we have seen a supermicro machine, which is 2U with 2 CPUs and 24 2.5 inch sata/sas drives, together with 2 onboard 10Gb NICs. I think it's good enough for both density and computing power. At the other end, we are also planning to evaluate small nodes for ceph, say an Atom with 2/4 disks per

Re: [ceph-users] using ssds with ceph

2013-03-17 Thread Mark Nelson
On 03/17/2013 05:40 PM, Matthieu Patou wrote: Hello all, Our dev environments are quite I/O intensive but don't require much space (~20G per dev environment); for the moment our dev machines are served by VMWare and the storage is done in NFS appliances with SAS or SATA drives. After some testin

[ceph-users] using ssds with ceph

2013-03-17 Thread Matthieu Patou
Hello all, Our dev environments are quite I/O intensive but don't require much space (~20G per dev environment); for the moment our dev machines are served by VMWare and the storage is done in NFS appliances with SAS or SATA drives. After some testing with consumer grade SSDs we discovered that
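A common layout for adding a small amount of SSD capacity to an otherwise spinning-disk cluster is to keep OSD data on the HDDs and put only the OSD journals on SSD partitions. A minimal ceph.conf sketch of that layout; the host name and device paths (/dev/sdg1, /dev/sdg2 on a shared SSD) are placeholders, not anything taken from this thread:

    [osd]
        osd journal size = 10240        # journal size in MB (relevant for file-based journals)

    [osd.0]
        host = dev-node-1
        osd journal = /dev/sdg1         # partition on the shared SSD

    [osd.1]
        host = dev-node-1
        osd journal = /dev/sdg2         # second partition on the same SSD

Whether consumer-grade SSDs hold up under journal write patterns is exactly what this thread goes on to discuss, so treat this as a layout sketch rather than a hardware recommendation.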

Re: [ceph-users] SL4500 as a storage machine

2013-03-17 Thread Mark Nelson
Hi Stas, The SL4500 series looks like it should be a good option for large deployments, though you may want to consider going with the 2-node configuration with 25 drives each. The drive density is a bit lower but you'll have a better CPU/drive ratio and can get away with much cheaper proces

Re: [ceph-users] Planning for many small files

2013-03-17 Thread Rustam Aliyev
Thanks for the detailed explanation. Is there any way to disable bucket indexes? We already store the index in our Cassandra cluster and need RADOS only to store objects. We don't plan to do any listing operations, only PUT and GET. On 17/03/2013 16:24, Gregory Farnum wrote: RADOS doesn't store a l
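Since the poster only needs PUT/GET, the rados command line tool shows what index-free access looks like: objects written this way go straight into RADOS and never touch the gateway or its bucket index objects. Pool and object names below are placeholders:

    # create a plain RADOS pool (128 PGs is only an example value)
    ceph osd pool create mypool 128

    # store and fetch an object directly, no RGW bucket or index involved
    rados -p mypool put myobject ./payload.bin
    rados -p mypool get myobject ./payload.copy
    rados -p mypool stat myobject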

[ceph-users] SL4500 as a storage machine

2013-03-17 Thread Stas Oskin
Hi. First of all, nice to meet you, and thanks for the great software! I've thoroughly read the benchmarks on the SuperMicro hardware with and without SSD combinations, and wondered if there were any tests done on HP file servers. According to this article: http://www.theregister.co.uk/2012/11/15

Re: [ceph-users] Re-exporting RBD images via iSCSI

2013-03-17 Thread Neil Levine
Very keen to get people to play with Dan's TGT changes so we can get feedback on performance and any bugs. I'd like for us (Inktank) to eventually support this as a blessed piece of the Ceph software. Neil On Sun, Mar 17, 2013 at 6:47 AM, Wido den Hollander wrote: > On 03/16/2013 04:36 PM, Patri

Re: [ceph-users] Uneven data placement

2013-03-17 Thread Andrey Korolyov
On Sun, Mar 17, 2013 at 8:31 PM, Gregory Farnum wrote: > On Sunday, March 17, 2013 at 9:25 AM, Andrey Korolyov wrote: >> On Sun, Mar 17, 2013 at 8:14 PM, Gregory Farnum wrote: >> > On Sunday, March 17, 2013 at 9:09 AM, Andrey Korolyov wrote: >> > > On Sun, Mar 17, 2013

[ceph-users] Status of Mac OS and Windows PC client

2013-03-17 Thread Igor Laskovy
Hi there! Could you please clarify the current status of client development for OS X and Windows desktop editions? -- Igor Laskovy facebook.com/igor.laskovy Kiev, Ukraine

Re: [ceph-users] Uneven data placement

2013-03-17 Thread Gregory Farnum
On Sunday, March 17, 2013 at 9:25 AM, Andrey Korolyov wrote: > On Sun, Mar 17, 2013 at 8:14 PM, Gregory Farnum wrote: > > On Sunday, March 17, 2013 at 9:09 AM, Andrey Korolyov wrote: > > > On Sun, Mar 17, 2013 at 7:56 PM, Gregory Farnum wr

Re: [ceph-users] Uneven data placement

2013-03-17 Thread Andrey Korolyov
On Sun, Mar 17, 2013 at 8:14 PM, Gregory Farnum wrote: > On Sunday, March 17, 2013 at 9:09 AM, Andrey Korolyov wrote: >> On Sun, Mar 17, 2013 at 7:56 PM, Gregory Farnum wrote: >> > On Sunday, March 17, 2013 at 4:46 AM, Andrey Korolyov wrote: >> > > Hi, >> > > >> > > fr

Re: [ceph-users] Planning for many small files

2013-03-17 Thread Gregory Farnum
RADOS doesn't store a list of objects. The RADOS Gateway uses a separate data format on top of objects stored in RADOS, and it keeps a per-user list of buckets and a per-bucket index of objects as "omap" objects in the OSDs (which ultimately end up in a leveldb store). A bucket index is currentl
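For readers who want to look at the index objects Greg describes, a rough sketch with the radosgw-admin and rados tools follows; the pool name (.rgw.buckets) and the .dir.<bucket_id> naming convention vary between releases, so treat both as assumptions to verify against your own cluster:

    # find the bucket id / marker for a given bucket
    radosgw-admin bucket stats --bucket=mybucket

    # list the omap keys on the bucket index object -- one key per stored object
    rados -p .rgw.buckets listomapkeys .dir.<bucket_id>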

Re: [ceph-users] how to enable MDS service in a running Ceph cluster

2013-03-17 Thread Gregory Farnum
On Friday, March 15, 2013 at 2:02 AM, Li, Chen wrote: > I need to create the directory “/var/lib/ceph/mds/mds.$id” by hand, right? > > I started the service as you said, and it succeeded. > But no “mds.$id” directory exists. > Will this prevent it from working? > > And, what will be installed in the
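For the manual steps under discussion, a rough sketch of adding an MDS (id "a" here as a placeholder) to a running cluster looks something like the following; the capability strings and init commands differ slightly between releases, so check them against the documentation for your version:

    # create the working directory the daemon expects
    mkdir -p /var/lib/ceph/mds/mds.a

    # create a key for the daemon and drop it into that directory
    ceph auth get-or-create mds.a mon 'allow rwx' osd 'allow *' mds 'allow' \
        -o /var/lib/ceph/mds/mds.a/keyring

    # add an [mds.a] section to ceph.conf on that host, then start the daemon
    service ceph start mds.a
    ceph mds stat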

Re: [ceph-users] Uneven data placement

2013-03-17 Thread Gregory Farnum
On Sunday, March 17, 2013 at 9:09 AM, Andrey Korolyov wrote: > On Sun, Mar 17, 2013 at 7:56 PM, Gregory Farnum wrote: > > On Sunday, March 17, 2013 at 4:46 AM, Andrey Korolyov wrote: > > > Hi, > > > > > > from osd tree: > > > > > > -16 4.95 host 10.5.0.52 > > > 32 1.

Re: [ceph-users] Uneven data placement

2013-03-17 Thread Andrey Korolyov
On Sun, Mar 17, 2013 at 7:56 PM, Gregory Farnum wrote: > On Sunday, March 17, 2013 at 4:46 AM, Andrey Korolyov wrote: >> Hi, >> >> from osd tree: >> >> -16 4.95 host 10.5.0.52 >> 32 1.9 osd.32 up 2 >> 33 1.05 osd.33 up 1 >> 34 1 osd.34 up 1 >> 35 1 osd.35 up 1 >> >> df -h: >> /dev/sdd3 3.7T 595G 3

Re: [ceph-users] Uneven data placement

2013-03-17 Thread Gregory Farnum
On Sunday, March 17, 2013 at 4:46 AM, Andrey Korolyov wrote: > Hi, > > from osd tree: > > -16 4.95 host 10.5.0.52 > 32 1.9 osd.32 up 2 > 33 1.05 osd.33 up 1 > 34 1 osd.34 up 1 > 35 1 osd.35 up 1 > > df -h: > /dev/sdd3 3.7T 595G 3.1T 16% /var/lib/ceph/osd/32 > /dev/sde3 3.7T 332G 3.4T 9% /var/

Re: [ceph-users] Re-exporting RBD images via iSCSI

2013-03-17 Thread Wido den Hollander
On 03/16/2013 04:36 PM, Patrick McGarry wrote: Hey guys, TGT has indeed been patched with the first pass at iSCSI work by Inktanker Dan Mick. This should probably be considered a 'tech preview' as it is quite new. Expect a blog entry to show up on the ceph.com blog in a week or two from Dan about
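For anyone who wants to try the patched tgt before the blog post lands, the sketch below shows the usual tgtadm target setup with the rbd backing store selected. The target IQN, pool ("rbd") and image name ("myimage") are placeholders, and the exact --bstype / --backing-store syntax should be checked against Dan's patch:

    # create an iSCSI target
    tgtadm --lld iscsi --op new --mode target --tid 1 \
        -T iqn.2013-03.com.example:rbd-test

    # attach an RBD image as LUN 1 using the rbd backing-store type
    tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 \
        --bstype rbd --backing-store rbd/myimage

    # allow initiators to connect (open to all here, for testing only)
    tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL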

Re: [ceph-users] Uneven data placement

2013-03-17 Thread Andrey Korolyov
On Sun, Mar 17, 2013 at 4:35 PM, Mark Nelson wrote: > On 03/17/2013 06:46 AM, Andrey Korolyov wrote: >> >> Hi, >> >> from osd tree: >> >> -16 4.95 host 10.5.0.52 >> 32 1.9 osd.32 up 2 >> 33 1.05 osd.33 u

Re: [ceph-users] Uneven data placement

2013-03-17 Thread Mark Nelson
On 03/17/2013 06:46 AM, Andrey Korolyov wrote: Hi, from osd tree: -16 4.95 host 10.5.0.52 32 1.9 osd.32 up 2 33 1.05 osd.33 up 1 34 1 osd.34 up 1 35

[ceph-users] Uneven data placement

2013-03-17 Thread Andrey Korolyov
Hi, from osd tree:

    -16  4.95   host 10.5.0.52
    32   1.9      osd.32  up  2
    33   1.05     osd.33  up  1
    34   1        osd.34  up  1
    35   1        osd.35  up  1
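The weights above already hint at the answer: osd.32 carries a CRUSH weight of 1.9 while its neighbours sit at roughly 1, so CRUSH will steer about twice as many PGs to it. A minimal sketch of levelling the weights, assuming the disks really are the same size (the 1.0 value is illustrative, not taken from the thread):

    # bring osd.32 and osd.33 in line with the other OSDs on this host
    ceph osd crush reweight osd.32 1.0
    ceph osd crush reweight osd.33 1.0

    # watch the resulting data migration
    ceph -w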

[ceph-users] force create command line

2013-03-17 Thread waed Albataineh
Hi, what exactly does this command do: ceph pg force_create_pg 0.c ? Thank you
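For context while an answer arrives, these are the commands usually used around force_create_pg. Roughly speaking, it asks the monitors to recreate the named PG as an empty PG on the OSDs currently mapped to it, discarding whatever data the PG held, so it is a last resort; the pg id 0.c is taken from the question above:

    ceph pg dump_stuck stale        # list the PGs the monitors consider stuck
    ceph pg 0.c query               # inspect the state of pg 0.c before acting
    ceph pg force_create_pg 0.c     # recreate pg 0.c empty (any data in it is lost)
    ceph -s                         # watch it go from 'creating' to 'active+clean'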

[ceph-users] All PGs stuck stale

2013-03-17 Thread waed Albataineh
Hello, I had set the rep value to 0, although the lowest valid value for rep is 1, so I returned rep to the default value 2. When I check the health it still tells me that all my pgs are stuck stale: HEALTH_WARN 1728 pgs stale; 1728 pgs stuck stale. I've checked the OSDs and all are up!!  id
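A few commands that usually narrow down this kind of stuck-stale state; the pool name "data" and the pg id 0.1 below are placeholders for whichever pool and PG are affected in this cluster:

    # confirm the replication size the pools currently report
    ceph osd dump | grep pool

    # re-apply the size if the earlier change did not stick
    ceph osd pool set data size 2

    # look at one stale PG and at the OSDs that are supposed to host it
    ceph pg dump_stuck stale | head
    ceph pg 0.1 query
    ceph osd tree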