I'm looking at this:
https://github.com/ceph/ceph-cookbooks
seems to support the whole Ceph stack (rgw, mons, osds, mds)
Here:
http://wiki.ceph.com/Guides/General_Guides/Deploying_Ceph_with_Chef#Configure_your_Ceph_Environment
I can see that I need to configure the environment, as in the example, and I
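For reference, the environment mainly has to carry the cluster identity. A minimal sketch, assuming the ceph.config attribute layout from the cookbook's README; the fsid, mon name, and network below are placeholders:

    {
      "name": "ceph",
      "default_attributes": {
        "ceph": {
          "config": {
            "fsid": "00000000-0000-0000-0000-000000000000",
            "mon_initial_members": "mon01",
            "global": {
              "public network": "192.168.0.0/24"
            }
          }
        }
      }
    }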
Hello
we use libvirt from wheezy-backports
On 29/01/2014 04:13, Schlacta, Christ wrote:
Thank you zorg :) In theory it does help; however, I already have it
installed from a local repository. I'm planning to throw
that local repo into ceph and call it a day here. I did notic
On 28.01.2014 17:25, Peter Matulis wrote:
> Within the formula [1] there is an assumption that all pools contain the
> same number of objects. That's almost never the case.
From my experience, the two conditions that should be fulfilled are:
1. There should be a sufficient number of PGs p
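For reference, the formula being discussed is usually written as

    total PGs ≈ (number of OSDs × 100) / replica count

with the result then divided across pools; that division is where the equal-sized-pools assumption sneaks in, since a pool holding most of the objects would end up with too few PGs for its share of the data.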
Hi,
At least it used to be like that - I'm not sure if that has changed. I
believe this is also part of why it is advised to go with the same kind of
hardware and setup if possible.
Since RBD images, at least, are spread across objects throughout the cluster,
you'll probably have to wait for a slow disk when readin
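One hedged way to spot a consistently slow disk (assuming a release recent enough to have the command) is to compare per-OSD latencies:

    ceph osd perf    # per-OSD commit/apply latency; a persistent outlier is the disk to suspect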
Hi,
I can't mount.ceph:
mount.ceph bd-0:/ /mnt/myceph -v -o
name=admin,secretfile=/etc/ceph/admin.secret
parsing options: name=admin,secretfile=/etc/ceph/admin.secret
mount error 5 = Input/output error
ceph -s is OK.
When I look at /var/log/syslog, I can see:
Jan 29 12:22:06 bd-0 k
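Error 5 from the kernel client is generic; two hedged first checks (standard ceph CLI, with the monitor address taken from the command above) are whether an MDS is active and whether the admin key is readable:

    ceph mds stat
    ceph auth get client.admin
    mount -t ceph bd-0:6789:/ /mnt/myceph -o name=admin,secretfile=/etc/ceph/admin.secret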
On Wed, Jan 29, 2014 at 2:46 AM, McNamara, Bradley
wrote:
> My two questions are: is there any way to recreate the pools with the
> original pool number value that they had? Does it really matter that data
> pool is now 3, and rbd is now 4?
You can't recreate pools with already-used IDs.
It do
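For reference, pool names with their (fixed, never reused) IDs can be listed with:

    ceph osd lspools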
On Wed, Jan 29, 2014 at 1:58 PM, Markus Goldberg
wrote:
> Hi,
> I can't mount.ceph:
> mount.ceph bd-0:/ /mnt/myceph -v -o
> name=admin,secretfile=/etc/ceph/admin.secret
> parsing options: name=admin,secretfile=/etc/ceph/admin.secret
> mount error 5 = Input/output error
>
> ceph -s is OK,
>
> When
On 01/28/2014 09:46 PM, McNamara, Bradley wrote:
> I finally have my first test cluster up and running. No data on it,
> yet. The config is: three mons and three OSD servers. Each OSD
> server has eight 4TB SAS drives and two SSD journal drives.
>
> The cluster is healthy, so I start
Hi,
I would like to customize the ceph.conf generated by ceph-deploy.
Should I customize the ceph.conf stored on the admin node and then sync it to
each ceph node?
If yes:
1. Can I sync directly with ceph-deploy, or do I have to sync manually via scp?
2. I don't see any host definition in ceph.conf, what wi
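A sketch of the ceph-deploy route, assuming ceph.conf is edited in the working directory ceph-deploy was first run from, and that node1..node3 stand in for your hosts:

    ceph-deploy --overwrite-conf config push node1 node2 node3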
On Tue, Jan 28, 2014 at 6:43 PM, Stuart Longland wrote:
> Hi Gregory,
> On 28/01/14 15:51, Gregory Farnum wrote:
>>> I do note ntp doesn't seem to be doing its job, but that's a side issue.
>> Actually, that could be it. If you take down one of the monitors and
>> the other two have enough of a ti
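A quick way to check whether clock skew is biting (the option name and its 0.05 s default are the stock mon settings):

    ceph health detail    # reports "clock skew detected on mon.X" beyond mon_clock_drift_allowed
    ntpq -p               # run on each mon to confirm NTP is actually syncing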
Unless I misunderstand this, three OSD servers, each with eight OSDs, for a
total of 24 OSDs. The formula is (as I understand it): Total PGs = 100 x
24/3. And I see an error in my simple math! I should have said 800. Is that
more what you were expecting?
Brad
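Spelled out, with the usual (advisory, not mandatory) rounding up to a power of two:

    total PGs = (24 OSDs × 100) / 3 replicas = 800, rounded up → 1024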
In researching how to create a block device on my Ceph object store, I see
there are two options for getting hold of the RBD module on RHEL (until RedHat
includes the necessary modules by default). Please correct me if I'm wrong.
- Use an EPEL repo
We don't have external internet
On 01/29/2014 12:27 PM, alistair.whit...@barclays.com wrote:
> We will not be able to deploy anything other than a fully supported RedHat
> kernel
In which case your only option is probably RHEL 7, and hope they didn't
exclude the ceph modules from their kernel.
The stock CentOS 6.5 kernel does not have
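A quick check for whether a given kernel ships the module at all:

    modinfo rbd                         # prints module metadata if it exists
    modprobe rbd && lsmod | grep rbd    # loads it and confirms it is resident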
On Jan 29, 2014 10:44 AM, "Dimitri Maziuk" wrote:
>
> On 01/29/2014 12:40 PM, Schlacta, Christ wrote:
> > Why can't you compile it yourself using RHEL's equivalent of dkms?
>
> Because of
>
> >>> fully supported RedHat
> ^^^
DKMS is Red Hat technology; they developed it.
On 01/29/2014 12:47 PM, Schlacta, Christ wrote:
> DKMS is Red Hat technology; they developed it. Whether or not they support
> it I don't know... what I do know is that dkms by design doesn't modify your
> running, installed, "fully supported RedHat" kernel. This is in fact why
> and how RedHat desi
Hi Guys
We have the current config:
2 x storage servers, 128GB RAM, dual E5-2609, LSI MegaRAID SAS 9271-4i; each
server has 24 x 3TB disks. These were originally set up as 8 groups of 3-disk
RAID0 (we are slowly moving to one OSD per disk). We initially had the
journals stored on an SSD, however af
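For the move to one OSD per disk with an SSD journal, a hedged sketch using ceph-deploy (host and device names are hypothetical):

    ceph-deploy osd create store1:sdb:/dev/sdc1    # data disk : journal partition on the SSD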
I can only comment on the log. I would recommend using three logs (6 disks
as mirror pairs) per system, and adding a CRUSH map hierarchy level for the
cache drives so that any given PG will never mirror twice to the same log.
That'll also reduce your failure domain.
On Jan 29, 2014 4:26 PM, "Geraint Jones"
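For the CRUSH suggestion above, the usual edit loop (standard commands; the new hierarchy level itself is yours to define):

    ceph osd getcrushmap -o crush.bin
    crushtool -d crush.bin -o crush.txt
    # add a bucket type for the journal groups and point the rule's chooseleaf step at it
    crushtool -c crush.txt -o crush.new
    ceph osd setcrushmap -i crush.new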