Re: [ceph-users] [openstack-community] Create VM (8 core and 8GB memory)

2013-12-21 Thread Loic Dachary
Hi Vikas,

I'm directing your question to the Ceph user mailing list. You're more likely 
to get answers there.

Cheers

On 21/12/2013 08:12, Vikas Parashar wrote:
> Hi,
> 
> Recently I have started tinkering with the OpenStack project. Could anybody 
> please let me know: can I create a VM with 8 cores and 8GB of RAM on the 
> infrastructure described below?
> 
> I am using 10 virtualization-enabled machines with 2 cores and 4GB of RAM in 
> each machine.
>  
> 
> 
> 
> 
> ___
> Community mailing list
> commun...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/community
> 

-- 
Loïc Dachary, Artisan Logiciel Libre



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Armel debian repository

2013-12-21 Thread Mario Giammarco
Mario Giammarco  writes:

> 
> Hello,
> I would like to install ceph on a Netgear ReadyNAS 102.
> It is Debian wheezy based.
> I have tried to add the ceph repository, but the NAS is "armel" architecture and I
> see you only provide a repo for the "armhf" architecture.
> 
> How can I solve this problem?
> 
> Thanks,
> Mario
> 


Hello again,
can no one help me?
A tutorial? A small hint?
Cross-compiling?
Some armel repository?

Thanks again,
Mario


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] [openstack-community] Create VM (8 core and 8GB memory)

2013-12-21 Thread Vikas Parashar
Thanks Loic


On Sat, Dec 21, 2013 at 2:40 PM, Loic Dachary  wrote:

> Hi Vikas,
>
> I'm directing your question to the Ceph user mailing list. You're more
> likely to get answers there.
>
> Cheers
>
> On 21/12/2013 08:12, Vikas Parashar wrote:
> > Hi,
> >
> > Recently I have started tinkering with the OpenStack project. Could anybody
> > please let me know: can I create a VM with 8 cores and 8GB of RAM on the
> > infrastructure described below?
> >
> > I am using 10 virtualization-enabled machines with 2 cores and 4GB of RAM
> > in each machine.
> >
> >
> >
> >
> >
> > ___
> > Community mailing list
> > commun...@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/community
> >
>
> --
> Loïc Dachary, Artisan Logiciel Libre
>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] When will Ceph FS be ready for use with production data

2013-12-21 Thread Yan, Zheng
On Fri, Dec 20, 2013 at 3:07 AM, Abhijeet Nakhwa
 wrote:
> Ceph FS is really cool and exciting! It makes a lot of sense for us to
> leverage it.
>
> Are there any established goals / timelines for using Ceph FS in production?
> Are specific individual support contracts available if Ceph FS is to be
> used in production?
>

I don't know when Inktank will declare CephFS stable. But as a CephFS
developer, I already have trouble finding new issues in my test setup.
If you are willing to help improve CephFS, please try it and
report any issues you encounter.

Regards
Yan, Zheng


> Thanks!
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph RAM Requirement?

2013-12-21 Thread hemant burman
Can someone please help out here?


On Sat, Dec 21, 2013 at 9:47 AM, hemant burman wrote:

> Hello,
>
> We have boxes with 24 drives, 2TB each, and want to run one OSD per drive.
> What should be the ideal memory requirement of the system, keeping in mind
> OSD rebalancing and failure/replication of, say, 10-15TB of data?
>
> -Hemant
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph RAM Requirement?

2013-12-21 Thread Ирек Фасихов
The recommendation is 1 GB of RAM per OSD disk.
On 21 Dec 2013 at 17:54, "hemant burman" 
wrote:

>
> Can someone please help out here?
>
>
> On Sat, Dec 21, 2013 at 9:47 AM, hemant burman wrote:
>
>> Hello,
>>
>> We have boxes with 24 drives, 2TB each, and want to run one OSD per drive.
>> What should be the ideal memory requirement of the system, keeping in
>> mind OSD rebalancing and failure/replication of, say, 10-15TB of data?
>>
>> -Hemant
>>
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] When will Ceph FS be ready for use with production data

2013-12-21 Thread Wido den Hollander

On 12/21/2013 02:50 PM, Yan, Zheng wrote:

On Fri, Dec 20, 2013 at 3:07 AM, Abhijeet Nakhwa
 wrote:

Ceph FS is really cool and exciting! It makes a lot of sense for us to
leverage it.

Are there any established goals / timelines for using Ceph FS in production?
Are specific individual support contracts available if Ceph FS is to be
used in production?



I don't know when Inktank will declare CephFS stable. But as a CephFS
developer, I already have trouble finding new issues in my test setup.
If you are willing to help improve CephFS, please try it and
report any issues you encounter.



Great to hear. Are you also testing Multi-MDS or just one Active/Standby?

And snapshots? Those were giving some problems as well.


Regards
Yan, Zheng



Thanks!

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




--
Wido den Hollander
42on B.V.

Phone: +31 (0)20 700 9902
Skype: contact42on
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Armel debian repository

2013-12-21 Thread Wido den Hollander

On 12/21/2013 12:00 PM, Mario Giammarco wrote:

Mario Giammarco  writes:



Hello,
I would like to install ceph on a Netgear ReadyNAS 102.
It is Debian wheezy based.
I have tried to add the ceph repository, but the NAS is "armel" architecture and I
see you only provide a repo for the "armhf" architecture.

How can I solve this problem?



What version of ARM CPU is in the Netgear NAS?

The packages are built for ARMv7 and, for example, don't work on a 
Raspberry Pi, which is ARMv6.


Another solution would be to build the packages manually for the Netgear NAS.
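
For reference, a rough sketch of what building the packages natively on the NAS 
could look like (assuming enough disk and RAM, that wheezy provides the build 
dependencies on armel, and that the v0.72.2 tag is the release you want; this 
is untested on armel):

$ dpkg --print-architecture                     # confirm the NAS really is armel
$ uname -m                                      # kernel-reported CPU, e.g. armv5tel
$ sudo apt-get install build-essential devscripts equivs git
$ git clone --branch v0.72.2 https://github.com/ceph/ceph.git
$ cd ceph
$ sudo mk-build-deps --install debian/control   # pull in the Debian build dependencies
$ dpkg-buildpackage -b -us -uc                  # .deb files end up one directory up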


Thanks,
Mario




Hello again,
can no one help me?
A tutorial? A small hint?
Cross-compiling?
Some armel repository?

Thanks again,
Mario


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




--
Wido den Hollander
42on B.V.

Phone: +31 (0)20 700 9902
Skype: contact42on
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph RAM Requirement?

2013-12-21 Thread Mark Nelson
I usually like to go a little higher than 1GB per OSD personally.  Given 
what a box with 24 spinning disks costs, I'd be tempted to throw in 64GB 
of RAM, but 32GB should do OK.
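
As a rough back-of-the-envelope check of those numbers (the ~2GB-per-OSD 
figure during recovery is an assumption, not an official recommendation):

24 OSDs x ~1GB  (steady state)           -> ~24GB minimum
24 OSDs x ~2GB  (assumed recovery peak)  -> ~48GB
whatever is left of a 64GB box           -> page cache for reads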


On 12/21/2013 09:22 AM, Ирек Фасихов wrote:

The recommendation is 1 GB of RAM per OSD disk.

On 21 Dec 2013 at 17:54, "hemant burman"
<hemant.bur...@gmail.com> wrote:


Can someone please help out here?


On Sat, Dec 21, 2013 at 9:47 AM, hemant burman
<hemant.bur...@gmail.com> wrote:

Hello,

We have boxes with 24 Drives, 2TB each and want to run one OSD
per drive.
What should be the ideal Memory requirement of the system,
keeping in mind that OSD Rebalancing and failure/replication of
say 10-15TB data

-Hemant



___
ceph-users mailing list
ceph-users@lists.ceph.com 
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] When will Ceph FS be ready for use with production data

2013-12-21 Thread Dimitri Maziuk

On 12/21/2013 10:04 AM, Wido den Hollander wrote:

On 12/21/2013 02:50 PM, Yan, Zheng wrote:



I don't know when Inktank will declare CephFS stable. But as a CephFS
developer, I already have trouble finding new issues in my test setup.
If you are willing to help improve CephFS, please try it and
report any issues you encounter.



Great to hear. Are you also testing Multi-MDS or just one Active/Standby?

And snapshots? Those were giving some problems as well.


What was it I heard about performance tiers? The last time I tried, cephfs was 
spreading I/O "fairly" over OSDs, fast and slow alike, with no way to tune 
that.


Thanks,
Dima


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] [openstack-community] Create VM (8 core and 8GB memory)

2013-12-21 Thread Cristian Falcas
What does this have to do with Ceph?

You can create a virtual machine with at most as many resources as a single compute node has.

Since you can have "unlimited" virtual cores and also, using swap
space, "unlimited" RAM, we could say "yes" to your question.

But if you are planning to use that machine for anything, I would say
that you can have a VM with at most 2 cores and 3GB of RAM.
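
For what it's worth, a rough sketch of how such a flavor could be defined with 
the nova CLI of that era (m1.custom, the "auto" ID and the image ID are just 
example values):

$ nova flavor-create m1.custom auto 3072 20 2   # name, ID, RAM in MB, disk in GB, vCPUs
$ nova boot --flavor m1.custom --image <image-id> test-vm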

Best regards,
Cristian Falcas

On Sat, Dec 21, 2013 at 1:52 PM, Vikas Parashar  wrote:
> Thanks Loic
>
>
> On Sat, Dec 21, 2013 at 2:40 PM, Loic Dachary  wrote:
>>
>> Hi Vikas,
>>
>> I'm directing your question to the Ceph user mailing list. You're more
>> likely to get answers there.
>>
>> Cheers
>>
>> On 21/12/2013 08:12, Vikas Parashar wrote:
>> > Hi,
>> >
>> > Recently I have started tinkering with the OpenStack project. Could anybody
>> > please let me know: can I create a VM with 8 cores and 8GB of RAM on the
>> > infrastructure described below?
>> >
>> > I am using 10 virtualization-enabled machines with 2 cores and 4GB of RAM
>> > in each machine.
>> >
>> >
>> >
>> >
>> >
>> > ___
>> > Community mailing list
>> > commun...@lists.openstack.org
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/community
>> >
>>
>> --
>> Loïc Dachary, Artisan Logiciel Libre
>>
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph RAM Requirement?

2013-12-21 Thread Wido den Hollander

On 12/21/2013 05:31 PM, Mark Nelson wrote:

I usually like to go a little higher than 1GB per OSD personally.  Given
what a box with 24 spinning disks costs, I'd be tempted to throw in 64G
of ram, but 32GB should do ok..



Indeed. The benefit of having more memory is that the page cache can do 
its job and reduce the amount of read IOPS on the disks.


Wido


On 12/21/2013 09:22 AM, Ирек Фасихов wrote:

The recommendation is 1 GB of RAM per OSD disk.

On 21 Dec 2013 at 17:54, "hemant burman"
<hemant.bur...@gmail.com> wrote:


Can someone please help out here?


On Sat, Dec 21, 2013 at 9:47 AM, hemant burman
<hemant.bur...@gmail.com> wrote:

Hello,

We have boxes with 24 Drives, 2TB each and want to run one OSD
per drive.
What should be the ideal Memory requirement of the system,
keeping in mind that OSD Rebalancing and failure/replication of
say 10-15TB data

-Hemant



___
ceph-users mailing list
ceph-users@lists.ceph.com 
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



--
Wido den Hollander
42on B.V.

Phone: +31 (0)20 700 9902
Skype: contact42on
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph RAM Requirement?

2013-12-21 Thread hemant burman
So is 1GB of RAM per 2TB OSD (just reiterating) good enough?
Because I read somewhere that 1GB of RAM per 1TB of OSD is what's recommended.

-Hemant


On Sat, Dec 21, 2013 at 11:55 PM, Wido den Hollander  wrote:

> On 12/21/2013 05:31 PM, Mark Nelson wrote:
>
>> I usually like to go a little higher than 1GB per OSD personally.  Given
>> what a box with 24 spinning disks costs, I'd be tempted to throw in 64G
>> of ram, but 32GB should do ok..
>>
>>
> Indeed. The benefit of having more memory is that the page cache can do
> it's job and reduce the amount of read IOps on the disks.
>
> Wido
>
>
>  On 12/21/2013 09:22 AM, Ирек Фасихов wrote:
>>
>>> The recommendation is 1 GB of RAM per OSD disk.
>>>
>>> On 21 Dec 2013 at 17:54, "hemant burman"
>>> <hemant.bur...@gmail.com> wrote:
>>>
>>>
>>> Can someone please help out here?
>>>
>>>
>>> On Sat, Dec 21, 2013 at 9:47 AM, hemant burman
>>> mailto:hemant.bur...@gmail.com>> wrote:
>>>
>>> Hello,
>>>
>>> We have boxes with 24 Drives, 2TB each and want to run one OSD
>>> per drive.
>>> What should be the ideal Memory requirement of the system,
>>> keeping in mind that OSD Rebalancing and failure/replication of
>>> say 10-15TB data
>>>
>>> -Hemant
>>>
>>>
>>>
>>> ___
>>> ceph-users mailing list
>>> ceph-users@lists.ceph.com 
>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>>
>>>
>>>
>>> ___
>>> ceph-users mailing list
>>> ceph-users@lists.ceph.com
>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>>
>>>
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>
>
> --
> Wido den Hollander
> 42on B.V.
>
> Phone: +31 (0)20 700 9902
> Skype: contact42on
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] How much space?

2013-12-21 Thread shacky
Hi.

I am trying to understand how much available space I will get on my Ceph
cluster if I use three servers with 4x4TB hard drives each.

Thank you very much!
Bye.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph RAM Requirement?

2013-12-21 Thread Wido den Hollander

On 12/21/2013 07:45 PM, hemant burman wrote:

So is 1GB of RAM per 2TB OSD (just reiterating) good enough?
Because I read somewhere that 1GB of RAM per 1TB of OSD is what's recommended.



Well, 1GB is really the minimum. As Mark suggested, go for 64GB, since 
that probably isn't the most expensive part of the machine.


You could go for 32GB or 48GB, but more memory == better.

Wido


-Hemant


On Sat, Dec 21, 2013 at 11:55 PM, Wido den Hollander <w...@42on.com> wrote:

On 12/21/2013 05:31 PM, Mark Nelson wrote:

I usually like to go a little higher than 1GB per OSD
personally.  Given
what a box with 24 spinning disks costs, I'd be tempted to throw
in 64G
of ram, but 32GB should do ok..


Indeed. The benefit of having more memory is that the page cache can
do it's job and reduce the amount of read IOps on the disks.

Wido


On 12/21/2013 09:22 AM, Ирек Фасихов wrote:

The recommendation is 1 GB of RAM per OSD disk.

On 21 Dec 2013 at 17:54, "hemant burman"
<hemant.bur...@gmail.com> wrote:


 Can someone please help out here?


 On Sat, Dec 21, 2013 at 9:47 AM, hemant burman
<hemant.bur...@gmail.com> wrote:

 Hello,

 We have boxes with 24 Drives, 2TB each and want to
run one OSD
 per drive.
 What should be the ideal Memory requirement of the
system,
 keeping in mind that OSD Rebalancing and
failure/replication of
 say 10-15TB data

 -Hemant





--
Wido den Hollander
42on B.V.

Phone: +31 (0)20 700 9902
Skype: contact42on
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] How much space?

2013-12-21 Thread Wido den Hollander

On 12/21/2013 07:53 PM, shacky wrote:

Hi.

I am trying to understand how much space available I will get on my Ceph
cluster if I will use three servers with 4x4Tb hard drives each.


It all depends on the replication level you use, but let's assume 3.

With three nodes and 3x replication you effectively get the capacity of one machine.

A 4TB (decimal) drive is roughly 4 * 1000 * 1000 / 1024 / 1024 = 3.81TB.

This would result in 15.24TB of raw space per machine, so about 45TB of raw 
capacity in the cluster.


You probably shouldn't fill the disks over 80%, just to be safe. So with 
3x replication you have about 12.2TB of usable storage.
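
A hedged way to reproduce that arithmetic from the shell, using the exact 
10^12-bytes-to-TiB conversion (which comes out a little lower than the rough 
3.81TB figure above):

$ echo "scale=2; 4*10^12/2^40" | bc                # TiB per 4TB drive      -> 3.63
$ echo "scale=1; 12*4*10^12/2^40" | bc             # raw TiB, 12 drives     -> 43.6
$ echo "scale=1; (12*4*10^12/2^40)/3*0.8" | bc     # usable at 3x, 80% full -> 11.6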


Wido



Thank you very much!
Bye.



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




--
Wido den Hollander
42on B.V.

Phone: +31 (0)20 700 9902
Skype: contact42on
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] rebooting nodes in a ceph cluster

2013-12-21 Thread Mike Dawson
It is also useful to mention that you can set the noout flag when doing 
maintenance that needs to exceed the 'mon osd down out 
interval'.


$ ceph osd set noout
** no re-balancing will happen **

$ ceph osd unset noout
** normal re-balancing rules will resume **
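
A couple of related bits (mon.a below is just a placeholder monitor ID, and 
injectargs only changes the running daemon, not ceph.conf):

$ ceph osd dump | grep flags
** verify that the noout flag is actually set **

$ ceph tell mon.a injectargs '--mon-osd-down-out-interval 900'
** raises the down-out interval to 15 minutes on that monitor **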


- Mike Dawson


On 12/19/2013 7:51 PM, Sage Weil wrote:

On Thu, 19 Dec 2013, John-Paul Robinson wrote:

What impact does rebooting nodes in a ceph cluster have on the health of
the ceph cluster?  Can it trigger rebalancing activities that then have
to be undone once the node comes back up?

I have a 4 node ceph cluster each node has 11 osds.  There is a single
pool with redundant storage.

If it takes 15 minutes for one of my servers to reboot is there a risk
that some sort of needless automatic processing will begin?


By default, we start rebalancing data after 5 minutes.  You can adjust
this (to, say, 15 minutes) with

  mon osd down out interval = 900

in ceph.conf.

sage



I'm assuming that the ceph cluster can go into a "not ok" state but that
in this particular configuration all the data is protected against the
single node failure and there is no place for the data to migrate to, so
nothing "bad" will happen.

Thanks for any feedback.

~jpr
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] rebooting nodes in a ceph cluster

2013-12-21 Thread Mike Dawson
I think my wording was a bit misleading in my last message. Instead of 
"no re-balancing will happen", I should have said that no OSDs will be 
marked out of the cluster with the noout flag set.


- Mike

On 12/21/2013 2:06 PM, Mike Dawson wrote:

It is also useful to mention that you can set the noout flag when doing
maintenance that needs to exceed the 'mon osd down out
interval'.

$ ceph osd set noout
** no re-balancing will happen **

$ ceph osd unset noout
** normal re-balancing rules will resume **


- Mike Dawson


On 12/19/2013 7:51 PM, Sage Weil wrote:

On Thu, 19 Dec 2013, John-Paul Robinson wrote:

What impact does rebooting nodes in a ceph cluster have on the health of
the ceph cluster?  Can it trigger rebalancing activities that then have
to be undone once the node comes back up?

I have a 4 node ceph cluster each node has 11 osds.  There is a single
pool with redundant storage.

If it takes 15 minutes for one of my servers to reboot is there a risk
that some sort of needless automatic processing will begin?


By default, we start rebalancing data after 5 minutes.  You can adjust
this (to, say, 15 minutes) with

  mon osd down out interval = 900

in ceph.conf.

sage



I'm assuming that the ceph cluster can go into a "not ok" state but that
in this particular configuration all the data is protected against the
single node failure and there is no place for the data to migrate to, so
nothing "bad" will happen.

Thanks for any feedback.

~jpr
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] How much space?

2013-12-21 Thread shacky
>
> I all depends on the replication level you use, but let's assume 3.
>

Does replication level 3 mean that the data are all replicated three times
in the cluster?

So you get the capacity of one machine.
>
> 4 * 1000 * 1000 / 1024 /1024 = 3.81TB
>
> This would result in 15.24TB of raw space per machine, so 45TB of raw
> capacity in the cluster.
>
> You probably shouldn't fill the disks over 80% just to be safe. So with 3x
> replication you have 12.2TB of usable storage.
>

 From 45TB to 12.2TB? So little?
I would expect something similar to RAID5 (about 36TB of usable storage).
I think I don't understand how Ceph works: could you help me understand it
better?
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] How much space?

2013-12-21 Thread JJ Galvez
On Dec 21, 2013 12:32 PM, "shacky"  wrote:
>>
>> I all depends on the replication level you use, but let's assume 3.
>
>
> Does replication level 3 mean that the data are all replicated three
times in the cluster?
>

Replication is set on a per-pool basis. You can set some, or all, pools to a
replica size of 2 instead of 3.
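
For example (using the default 'rbd' pool as a stand-in; note the caveat about
this command with mixed-version monitors in the v0.72.2 notes further down):

$ ceph osd pool set rbd size 2        # keep 2 replicas instead of 3
$ ceph osd pool set rbd min_size 1    # still serve I/O with one replica left
$ ceph osd dump | grep '^pool'        # check the per-pool sizes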

>> So you get the capacity of one machine.
>>
>> 4 * 1000 * 1000 / 1024 /1024 = 3.81TB
>>
>> This would result in 15.24TB of raw space per machine, so 45TB of raw
capacity in the cluster.
>>
>> You probably shouldn't fill the disks over 80% just to be safe. So with
3x replication you have 12.2TB of usable storage.
>
>
>  From 45TB to 12.2TB? So little?
> I would expect something similar to RAID5 (about 36TB of usable storage).
> I think I don't understand how Ceph works: could you help me understand it
better?
>

Ceph uses replication, not parity or erasure coding (unlike RAID5), so data is
completely duplicated in multiple copies. Erasure coding is scheduled for
the Firefly release, according to the roadmap.

-JJ Galvez

>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] When will Ceph FS be ready for use with production data

2013-12-21 Thread Yan, Zheng
On Sun, Dec 22, 2013 at 12:04 AM, Wido den Hollander  wrote:
> On 12/21/2013 02:50 PM, Yan, Zheng wrote:
>>
>> On Fri, Dec 20, 2013 at 3:07 AM, Abhijeet Nakhwa
>>  wrote:
>>>
>>> Ceph FS is really cool and exciting! It makes a lot of sense for us to
>>> leverage it.
>>>
>>> Is there any established goal  / timelines for using Ceph FS for
>>> production
>>> use? Are specific individual support contracts available if Ceph FS is to
>>> be
>>> used in production?
>>>
>>
>> I don't know when Inktank will declare CephFS stable. But as a CephFS
>> developer, I already have trouble finding new issues in my test setup.
>> If you are willing to help improve CephFS, please try it and
>> report any issues you encounter.
>>
>
> Great to hear. Are you also testing Multi-MDS or just one Active/Standby?
>
I test both. For the multi-MDS setup (using the newest development
tree), basic fs functions are close to stable, but it still needs more
time to tune the performance.

> And snapshots? Those were giving some problems as well.
>
Snapshots are the most incomplete feature of cephfs. So far I have completely
ignored issues in this area.

>
>> Regards
>> Yan, Zheng
>>
>>
>>> Thanks!
>>>
>>> ___
>>> ceph-users mailing list
>>> ceph-users@lists.ceph.com
>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>>
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>
>
> --
> Wido den Hollander
> 42on B.V.
>
> Phone: +31 (0)20 700 9902
> Skype: contact42on
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] v0.67.5 dumpling released

2013-12-21 Thread Sage Weil
This release includes a few critical bug fixes for the radosgw, including 
a fix for hanging operations on large objects. There are also several bug 
fixes for radosgw multi-site replication, and a few backported features. 
Also, notably, the osd perf command (which dumps recent performance 
information about active OSDs) has been backported.
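
For example, the backported command looks roughly like this (the columns and 
numbers below are illustrative, not a verbatim transcript):

$ ceph osd perf
osd fs_commit_latency(ms) fs_apply_latency(ms)
  0                     2                    4
  1                     3                    5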

We recommend that all 0.67.x Dumpling users upgrade.

Notable changes:

 * ceph-fuse: fix crash in caching code
 * mds: fix looping in populate_mydir()
 * mds: fix standby-replay race
 * mon: accept osd pool set ... as string
 * mon: backport: osd perf command to dump recent OSD performance stats
 * osd: add feature compat check for upcoming object sharding
 * rbd.py: increase parent name size limit
 * rgw: backport: allow wildcard in supported keystone roles
 * rgw: backport: improve swift COPY behavior
 * rgw: backport: log and open admin socket by default
 * rgw: backport: validate S3 tokens against keystone
 * rgw: fix bucket removal
 * rgw: fix client error code for chunked PUT failure
 * rgw: fix hang on large object GET
 * rgw: fix rare use-after-free
 * rgw: various DR bug fixes
 * sysvinit, upstart: prevent starting daemons using both init systems

For the complete changelog, see

 * http://ceph.com/docs/master/_downloads/v0.67.5.txt

You can get v0.67.5 from the usual places:

 * Git at git://github.com/ceph/ceph.git
 * Tarball at http://ceph.com/download/ceph-0.67.5.tar.gz
 * For packages, see http://ceph.com/docs/master/install/get-packages/
 * For ceph-deploy, see http://ceph.com/docs/master/install/install-ceph-deploy/
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] v0.72.2 Emperor released

2013-12-21 Thread Sage Weil
This is the second bugfix release for the v0.72.x Emperor series.  We
have fixed a hang in radosgw, and fixed (again) a problem with monitor
CLI compatibility with mixed-version monitors.  (In the future this
will no longer be a problem.)

Upgrading:

* The JSON schema for the 'osd pool set ...' command changed slightly.  
  Please avoid issuing this particular command via the CLI while there is 
  a mix of v0.72.1 and v0.72.2 monitor daemons running.

Changes:

* mon: 'osd pool set ...' syntax change
* osd: added test for missing on-disk HEAD object
* osd: fix osd bench block size argument
* rgw: fix hang on large object GET
* rgw: fix rare use-after-free
* rgw: various DR bug fixes
* rgw: do not return error on empty owner when setting ACL
* sysvinit, upstart: prevent starting daemons using both init systems

For more detailed information, see the complete changelog:

 * http://ceph.com/docs/master/changelog/v0.72.2.txt

You can get v0.72.2 from the usual locations:

 * Git at git://github.com/ceph/ceph.git
 * Tarball at http://ceph.com/download/ceph-0.72.2.tar.gz
 * For packages, see http://ceph.com/docs/master/install/get-packages/
 * For ceph-deploy, see http://ceph.com/docs/master/install/install-ceph-deploy/

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com