Hi Vikas,
I'm directing your question to the Ceph user mailing list. You're more likely
to get answers there.
Cheers
On 21/12/2013 08:12, Vikas Parashar wrote:
> Hi,
>
> Recently, I have started tinkering with the OpenStack project. Could anybody
> please let me know: may I create a VM with 8 cores a
Mario Giammarco writes:
>
> Hello,
> I would like to install ceph on a Netgear ReadyNAS 102.
> It is Debian wheezy based.
> I have tried to add the Ceph repository, but the NAS is an "armel" architecture
> and I see you provide a repo for the "armhf" architecture.
>
> How can I solve this problem?
>
> Thank
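A quick way to confirm which ABI the NAS userland is actually running, assuming
the stock Debian tools are present, is:

$ dpkg --print-architecture
armel

If it reports armel, the armhf packages will not install, so building Ceph from
source on the box (or cross-compiling for armel) is the more likely route.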
Thanks Loic
On Sat, Dec 21, 2013 at 2:40 PM, Loic Dachary wrote:
> Hi Vikas,
>
> I'm directing your question to the Ceph user mailing list. You're more
> likely to get answers there.
>
> Cheers
>
> On 21/12/2013 08:12, Vikas Parashar wrote:
> > Hi,
> >
> > Recently, I have started tinkering with the Op
On Fri, Dec 20, 2013 at 3:07 AM, Abhijeet Nakhwa
wrote:
> Ceph FS is really cool and exciting! It makes a lot of sense for us to
> leverage it.
>
> Is there any established goal / timeline for using Ceph FS in production?
> Are specific individual support contracts available if Ceph FS is t
Can someone please help out here?
On Sat, Dec 21, 2013 at 9:47 AM, hemant burman wrote:
> Hello,
>
> We have boxes with 24 drives, 2TB each, and want to run one OSD per drive.
> What should the ideal memory requirement of the system be, keeping in mind
> that OSD rebalancing and failure/replicati
The recommendation is 1 GB of RAM per OSD disk.
On Dec 21, 2013 at 17:54, "hemant burman" wrote:
>
> Can someone please help out here?
>
>
> On Sat, Dec 21, 2013 at 9:47 AM, hemant burman wrote:
>
>> Hello,
>>
>> We have boxes with 24 drives, 2TB each, and want to run one OSD per drive.
>> Wha
On 12/21/2013 02:50 PM, Yan, Zheng wrote:
On Fri, Dec 20, 2013 at 3:07 AM, Abhijeet Nakhwa
wrote:
Ceph FS is really cool and exciting! It makes a lot of sense for us to
leverage it.
Is there any established goal / timeline for using Ceph FS in production?
Are specific individual support
On 12/21/2013 12:00 PM, Mario Giammarco wrote:
Mario Giammarco writes:
Hello,
I would like to install ceph on a Netgear ReadyNAS 102.
It is Debian wheezy based.
I have tried to add the Ceph repository, but the NAS is an "armel" architecture
and I see you provide a repo for the "armhf" architecture.
How ca
I usually like to go a little higher than 1GB per OSD personally. Given
what a box with 24 spinning disks costs, I'd be tempted to throw in 64GB
of RAM, but 32GB should do OK.
On 12/21/2013 09:22 AM, Ирек Фасихов wrote:
The recommendation is 1 GB of RAM per OSD disk.
On Dec 21, 2013 at 17:54, "hemant burman" wrote:
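As a rough back-of-the-envelope for that box, assuming the commonly cited
1-2 GB of RAM per OSD daemon plus headroom for the OS and page cache:

24 OSDs x 1 GB = 24 GB (bare minimum, tight during recovery)
24 OSDs x 2 GB = 48 GB (more comfortable while rebalancing)
plus OS / page cache headroom -> 32-64 GB total

which is why 32GB is workable and 64GB is the more comfortable choice for a
24-drive chassis.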
On 12/21/2013 10:04 AM, Wido den Hollander wrote:
On 12/21/2013 02:50 PM, Yan, Zheng wrote:
I don't know when Inktank will claim CephFS is stable. But as a CephFS
developer, I already have trouble finding new issues in my test setup.
If you are willing to help improve CephFS, please try cephfs
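For anyone who wants to try it, a minimal sketch of mounting CephFS for testing,
assuming a running MDS and the admin keyring in the usual location (mon-host is
a placeholder for one of your monitor addresses):

$ mount -t ceph mon-host:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
$ ceph-fuse -m mon-host:6789 /mnt/cephfs

Either the kernel client or ceph-fuse works; use whichever is easier to run on
your test machines.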
What does this have to do with Ceph?
You can create a virtual machine with as many resources as a compute node has.
Since you can have "unlimited" virtual cores and also, using swap
space, "unlimited" RAM, we could say "yes" to your question.
But if you are planning to use that machine for anythin
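As an illustrative sketch on the OpenStack side (not Ceph), an 8-vCPU flavor can
be defined with the nova CLI; the flavor name, ID and sizes below are just
placeholders:

$ nova flavor-create m1.8core 100 16384 40 8

The arguments are name, ID, RAM in MB, disk in GB and vCPUs; any instance booted
from that flavor gets 8 cores, provided a compute node can supply them.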
On 12/21/2013 05:31 PM, Mark Nelson wrote:
I usually like to go a little higher than 1GB per OSD personally. Given
what a box with 24 spinning disks costs, I'd be tempted to throw in 64GB
of RAM, but 32GB should do OK.
Indeed. The benefit of having more memory is that the page cache can do
i
So is 1GB of RAM per OSD of size 2TB (just reiterating) good enough?
Because I read somewhere that 1GB of RAM per OSD of size 1TB is what's recommended.
-Hemant
On Sat, Dec 21, 2013 at 11:55 PM, Wido den Hollander wrote:
> On 12/21/2013 05:31 PM, Mark Nelson wrote:
>
>> I usually like to go a little higher tha
Hi.
I am trying to understand how much available space I will get on my Ceph
cluster if I use three servers with 4x4TB hard drives each.
Thank you very much!
Bye.
On 12/21/2013 07:45 PM, hemant burman wrote:
So is 1GB of RAM per OSD of size 2TB (just reiterating) good enough?
Because I read somewhere that 1GB of RAM per OSD of size 1TB is what's recommended.
Well, 1GB is really the minimum. As Mark suggested, go for 64GB since
that probably isn't the most expensive par
On 12/21/2013 07:53 PM, shacky wrote:
Hi.
I am trying to understand how much available space I will get on my Ceph
cluster if I use three servers with 4x4TB hard drives each.
It all depends on the replication level you use, but let's assume 3.
So you get the capacity of one machine.
4 * 1000 * 1000 / 1024 / 1024 = 3.81TB
It is also useful to mention that you can set the noout flag when maintenance
of any given length needs to exceed the 'mon osd down out interval'.
$ ceph osd set noout
** no re-balancing will happen **
$ ceph osd unset noout
** normal re-balancing rules will resume **
- Mike Dawson
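To double-check whether the flag is actually in place, it shows up in the
cluster status and in the OSD map output:

$ ceph -s
$ ceph osd dump | grep flags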
I think my wording was a bit misleading in my last message. Instead of
"no re-balancing will happen", I should have said that no OSDs will be
marked out of the cluster with the noout flag set.
- Mike
On 12/21/2013 2:06 PM, Mike Dawson wrote:
It is also useful to mention that you can set the n
>
> It all depends on the replication level you use, but let's assume 3.
>
Does replication level 3 mean that the data are all replicated three times
in the cluster?
> So you get the capacity of one machine.
>
> 4 * 1000 * 1000 / 1024 /1024 = 3.81TB
>
> This would result in 15.24TB of raw space per
On Dec 21, 2013 12:32 PM, "shacky" wrote:
>>
>> It all depends on the replication level you use, but let's assume 3.
>
>
> Does replication level 3 mean that the data are all replicated three
times in the cluster?
>
Replication is set on a per-pool basis. You can set some, or all, pools to
replica
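As a concrete sketch, adjusting and checking the replica count on one pool (the
pool name here is just an example):

$ ceph osd pool set rbd size 3
$ ceph osd pool get rbd size

After raising the size, Ceph re-replicates the existing objects in that pool to
meet the new count.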
On Sun, Dec 22, 2013 at 12:04 AM, Wido den Hollander wrote:
> On 12/21/2013 02:50 PM, Yan, Zheng wrote:
>>
>> On Fri, Dec 20, 2013 at 3:07 AM, Abhijeet Nakhwa
>> wrote:
>>>
>>> Ceph FS is really cool and exciting! It makes a lot of sense for us to
>>> leverage it.
>>>
>>> Is there any established
This release includes a few critical bug fixes for the radosgw, including
a fix for hanging operations on large objects. There are also several bug
fixes for radosgw multi-site replication, and a few backported features.
Also, notably, the osd perf command (which dumps recent performance
infor
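For reference, the command being described is simply:

$ ceph osd perf

which lists recent commit and apply latency figures for each OSD.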
This is the second bugfix release for the v0.72.x Emperor series. We
have fixed a hang in radosgw, and fixed (again) a problem with monitor
CLI compatibility with mixed-version monitors. (In the future this
will no longer be a problem.)
Upgrading:
* The JSON schema for the 'osd pool set ...' com
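For context, the command whose JSON schema is being referenced has the general
form (pool name, key and value are placeholders):

$ ceph osd pool set <pool-name> <key> <value>
$ ceph osd pool set data size 3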