Hello,
Is it a good idea to run the OSD and nova-compute on the same node, or
not so much, and if so, why?
Matthieu.
On 03/17/2013 04:03 PM, Mark Nelson wrote:
On 03/17/2013 05:40 PM, Matthieu Patou wrote:
Hello all,
Our dev environments are quite I/O intensive but don't require much
space (~20 GB per dev environment); for the moment our dev machines are
served by VMware and the storage is done on NFS appliance
On Sunday, March 17, 2013 at 5:49 PM, Yehuda Sadeh wrote:
> Not at the moment. We had some discussions about "blind" buckets; it's
> definitely on our minds, but we're not there yet.
>
> Yehuda
>
> On Sun, Mar 17, 2013 at 3:09 PM, Rustam Aliyev wrote:
> > Thanks for
On Sunday, March 17, 2013 at 4:03 PM, Mark Nelson wrote:
> On 03/17/2013 05:40 PM, Matthieu Patou wrote:
> > Hello all,
> >
> > Our dev environments are quite I/O intensive but don't require much
> > space (~20 GB per dev environment); for the moment our dev machines are
> > served by VMware and th
Not at the moment. We had some discussions about "blind" buckets; it's
definitely on our minds, but we're not there yet.
Yehuda
On Sun, Mar 17, 2013 at 3:09 PM, Rustam Aliyev wrote:
> Thanks for the detailed explanation.
>
> Is there any way to disable bucket indexes? We already store the index in our
>
> For me, we have seen a Supermicro machine, which is 2U with 2 CPUs and 24 2.5
> inch SATA/SAS drives, together with 2 onboard 10Gb NICs. I think it's good
> enough for both density and computing power.
>
>
Can this configuration also hold 12 3.5-inch drives? What model do you use?
Hi Mark.
> The SL4500 series looks like it should be a good option for large
> deployments, though you may want to consider going with the 2-node
> configuration with 25 drives each. The drive density is a bit lower but
> you'll have a better CPU/drive ratio and can get away with much cheaper
>
For me, we have seen a Supermicro machine, which is 2U with 2 CPUs and 24 2.5-inch
SATA/SAS drives, together with 2 onboard 10Gb NICs. I think it's good enough for
both density and computing power.
At the other end, we are also planning to evaluate a small node for Ceph, say an
Atom with 2/4 disks per
On 03/17/2013 05:40 PM, Matthieu Patou wrote:
Hello all,
Our dev environments are quite I/O intensive but don't require much
space (~20 GB per dev environment); for the moment our dev machines are
served by VMware and the storage is done on NFS appliances with SAS or
SATA drives.
After some testin
Hello all,
Our dev environments are quite I/O intensive but don't require much
space (~20 GB per dev environment); for the moment our dev machines are
served by VMware and the storage is done on NFS appliances with SAS or
SATA drives.
After some testing with consumer-grade SSDs we discovered that
Hi Stas,
The SL4500 series looks like it should be a good option for large
deployments, though you may want to consider going with the 2-node
configuration with 25 drives each. The drive density is a bit lower but
you'll have a better CPU/drive ratio and can get away with much cheaper
proces
Thanks for the detailed explanation.
Is there any way to disable bucket indexes? We already store the index in
our Cassandra cluster and need RADOS only to store objects. We don't
plan to do any listing operations, only PUT and GET.
On 17/03/2013 16:24, Gregory Farnum wrote:
RADOS doesn't store a l
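If the index lives in Cassandra and RADOS is only needed for PUT and GET, one option is to bypass radosgw and talk to a RADOS pool directly, so no bucket index is involved at all. A minimal sketch with the rados CLI; the pool and object names are placeholders:

  # create a plain RADOS pool to hold the objects (name is a placeholder)
  rados mkpool devobjects
  # PUT: store a local file under a key tracked by your own external index
  rados -p devobjects put my-object-key ./local-file
  # GET: fetch it back by the same key
  rados -p devobjects get my-object-key ./restored-file

Listing then comes entirely from the external index, since nothing on the RADOS side enumerates the objects for you.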
Hi.
First of all, nice to meet you, and thanks for the great software!
I've thoroughly read the benchmarks on the SuperMicro hardware with and
without SSD combinations, and wondered if there were any tests done on HP
file servers.
According to this article:
http://www.theregister.co.uk/2012/11/15
Very keen to get people to play with Dan's TGT changes so we can get
feedback on performance and any bugs. I'd like for us (Inktank) to
eventually support this as a blessed piece of the Ceph software.
Neil
On Sun, Mar 17, 2013 at 6:47 AM, Wido den Hollander wrote:
> On 03/16/2013 04:36 PM, Patri
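For anyone who wants to try it, a rough sketch of exposing an RBD image through the patched tgt; the target name, pool/image, and the "rbd" backing-store type are assumptions about Dan's patch, so adjust to your build:

  # create a test RBD image first (10 GB; pool/image names are placeholders)
  rbd create mypool/iscsi-test --size 10240
  # define an iSCSI target and attach the image as LUN 1 via the rbd backing store
  tgtadm --lld iscsi --mode target --op new --tid 1 --targetname iqn.2013-03.com.example:rbd-test
  tgtadm --lld iscsi --mode logicalunit --op new --tid 1 --lun 1 --bstype rbd --backing-store mypool/iscsi-test
  # allow initiators to connect (open to all here; restrict this in real setups)
  tgtadm --lld iscsi --mode target --op bind --tid 1 --initiator-address ALL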
On Sun, Mar 17, 2013 at 8:31 PM, Gregory Farnum wrote:
> On Sunday, March 17, 2013 at 9:25 AM, Andrey Korolyov wrote:
>> On Sun, Mar 17, 2013 at 8:14 PM, Gregory Farnum wrote:
>> > On Sunday, March 17, 2013 at 9:09 AM, Andrey Korolyov wrote:
>> > > On Sun, Mar 17, 2013
Hi there!
Could you please clarify the current status of client development
for OS X and Windows desktop editions?
--
Igor Laskovy
facebook.com/igor.laskovy
Kiev, Ukraine
On Sunday, March 17, 2013 at 9:25 AM, Andrey Korolyov wrote:
> On Sun, Mar 17, 2013 at 8:14 PM, Gregory Farnum wrote:
> > On Sunday, March 17, 2013 at 9:09 AM, Andrey Korolyov wrote:
> > > On Sun, Mar 17, 2013 at 7:56 PM, Gregory Farnum wr
On Sun, Mar 17, 2013 at 8:14 PM, Gregory Farnum wrote:
> On Sunday, March 17, 2013 at 9:09 AM, Andrey Korolyov wrote:
>> On Sun, Mar 17, 2013 at 7:56 PM, Gregory Farnum wrote:
>> > On Sunday, March 17, 2013 at 4:46 AM, Andrey Korolyov wrote:
>> > > Hi,
>> > >
>> > > fr
RADOS doesn't store a list of objects. The RADOS Gateway uses a separate data
format on top of objects stored in RADOS, and it keeps a per-user list of
buckets and a per-bucket index of objects as "omap" objects in the OSDs (which
ultimately end up in a leveldb store). A bucket index is currentl
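To poke at that layering, a rough sketch; the uid, bucket name, pool, and index object name below are placeholders, and the exact pool/object naming varies by version:

  # per-user bucket list kept by radosgw
  radosgw-admin bucket list --uid=johndoe
  # bucket stats include the bucket id/marker used to name the index object
  radosgw-admin bucket stats --bucket=mybucket
  # the per-bucket index is omap data attached to a ".dir.<bucket_id>" object;
  # if your rados tool has the omap commands you can list the indexed keys
  rados -p .rgw.buckets listomapkeys .dir.<bucket_id>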
On Friday, March 15, 2013 at 2:02 AM, Li, Chen wrote:
> I need to create the directory “/var/lib/ceph/mds/mds.$id” by hand, right?
>
> I started the service as you said, and it succeeded.
> But no “mds.$id” directory exists.
> Will this affect whether it works?
>
> And what will be installed in the
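For reference, a minimal sketch of creating the MDS data directory and key by hand; the mds id "a" and the capability set are assumptions, so adjust to your cluster:

  # create the data directory the daemon expects
  mkdir -p /var/lib/ceph/mds/mds.a
  # create (or fetch) an auth key for the daemon and drop it in the data dir
  ceph auth get-or-create mds.a mds 'allow' osd 'allow rwx' mon 'allow rwx' \
      -o /var/lib/ceph/mds/mds.a/keyring
  # then start just that daemon
  service ceph start mds.a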
On Sunday, March 17, 2013 at 9:09 AM, Andrey Korolyov wrote:
> On Sun, Mar 17, 2013 at 7:56 PM, Gregory Farnum wrote:
> > On Sunday, March 17, 2013 at 4:46 AM, Andrey Korolyov wrote:
> > > Hi,
> > >
> > > from osd tree:
> > >
> > > -16 4.95 host 10.5.0.52
> > > 32 1.
On Sun, Mar 17, 2013 at 7:56 PM, Gregory Farnum wrote:
> On Sunday, March 17, 2013 at 4:46 AM, Andrey Korolyov wrote:
>> Hi,
>>
>> from osd tree:
>>
>> -16 4.95 host 10.5.0.52
>> 32 1.9 osd.32 up 2
>> 33 1.05 osd.33 up 1
>> 34 1 osd.34 up 1
>> 35 1 osd.35 up 1
>>
>> df -h:
>> /dev/sdd3 3.7T 595G 3
On Sunday, March 17, 2013 at 4:46 AM, Andrey Korolyov wrote:
> Hi,
>
> from osd tree:
>
> -16 4.95 host 10.5.0.52
> 32 1.9 osd.32 up 2
> 33 1.05 osd.33 up 1
> 34 1 osd.34 up 1
> 35 1 osd.35 up 1
>
> df -h:
> /dev/sdd3 3.7T 595G 3.1T 16% /var/lib/ceph/osd/32
> /dev/sde3 3.7T 332G 3.4T 9% /var/
On 03/16/2013 04:36 PM, Patrick McGarry wrote:
Hey guys,
TGT has indeed been patched with the first pass at iSCSI work by
Inktanker Dan Mick. This should probably be considered a 'tech
preview' as it is quite new. Expect a blog entry to show up on the
ceph.com blog in a week or two from Dan about
On Sun, Mar 17, 2013 at 4:35 PM, Mark Nelson wrote:
> On 03/17/2013 06:46 AM, Andrey Korolyov wrote:
>>
>> Hi,
>>
>> from osd tree:
>>
>> -16 4.95  host 10.5.0.52
>> 32  1.9   osd.32  up  2
>> 33  1.05  osd.33  u
On 03/17/2013 06:46 AM, Andrey Korolyov wrote:
Hi,
from osd tree:
-16 4.95  host 10.5.0.52
32  1.9   osd.32  up  2
33  1.05  osd.33  up  1
34  1     osd.34  up  1
35
Hi,
from osd tree:
-16 4.95  host 10.5.0.52
32  1.9   osd.32  up  2
33  1.05  osd.33  up  1
34  1     osd.34  up  1
35  1     osd.35  up  1
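For reference, the second column above is the CRUSH weight and the last column is the per-OSD reweight, and both can be changed at runtime. A minimal sketch; the target values are placeholders, not a recommendation:

  ceph osd tree                        # columns: id, weight, name, up/down, reweight
  ceph osd crush reweight osd.32 1.0   # change the CRUSH weight that drives data placement
  ceph osd reweight 33 1.0             # change the temporary reweight override (0.0-1.0)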
Hi,
what does this command exactly do: ceph pg force_create_pg 0.c?
Thank you
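For context, force_create_pg asks the cluster to (re)create the named PG (0.c here) empty on its OSDs, which is typically only used when a PG's data has been lost and it is stuck. A rough sketch of checking the PG around that command; output details vary by version:

  ceph pg dump_stuck stale      # list PGs stuck in the stale state
  ceph pg 0.c query             # inspect the PG's current state and acting OSDs
  ceph pg force_create_pg 0.c   # mark this PG to be recreated, empty
  ceph -s                       # watch the creating/stale counts change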
Hello,
I had set the rep value to 0, while the least value for rep is 1, so I
returned rep to the default value of 2.
When I check the health it still tells me that all my PGs are stuck stale:
HEALTH_WARN 1728 pgs stale; 1728 pgs stuck stale
I've checked the OSDs and all are up!!
id
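For reference, a minimal sketch of putting the replication size back and checking the stuck PGs; the pool name "data" is a placeholder, so repeat for each affected pool:

  ceph osd pool set data size 2    # restore two copies for the pool
  ceph osd dump | grep pool        # confirm each pool's replication setting
  ceph pg dump_stuck stale         # list PGs still reported as stale
  ceph health detail               # per-PG detail about what is stuck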