Has anyone else run into this error? I've tried different versions of GCC,
and even both CentOS and RHEL, to compile Calamari, but it continues to fail.
By the way, the instructions on the Ceph website are not correct, because the
virtual machine used with Vagrant isn't complete with whichever versions they use.
hello
I also use inkscope to visualise the cluster state and CRUSH rules, and to
manage radosgw users.
https://github.com/inkscope/inkscope
best regards!
--
pawel
On Wed, Dec 24, 2014 at 7:28 AM, Udo Lembke wrote:
>
> Hi,
> for monitoring only I use the Ceph Dashboard
> https://github.com/Crapworks/ce
Hi Mykola,
On Wed, 24 Dec 2014 17:22:13 Mykola Golub wrote:
> I stumbled upon this feature request from Dmitry, to make osd tree
> show the primary-affinity value:
>
> http://tracker.ceph.com/issues/10036
>
> This looks useful in some cases and is simple to implement, so here is
> the patch:
>
>
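For context, primary affinity can already be set per OSD from the command
line; a rough sketch of setting it and then inspecting the tree (the OSD id
and weight below are made up, and the extra column is what the patch would
add):

  # Make osd.3 less likely to be chosen as primary (example value)
  ceph osd primary-affinity osd.3 0.5

  # With the patch applied, the tree output should show the new column
  ceph osd tree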
Hi Sage
To be sure I understand correctly: if I have reached the max number of PGs
per OSD with, for example, 4 pools, and I have to create 2 new pools without
adding OSDs, I need to migrate the old pools to pools with fewer PGs, right?
Thanks
Sent from my iPhone
> On 23 déc. 2014, at 15:39, Sage Weil
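As a rough sanity check on the question above, the PGs-per-OSD figure can be
estimated from the existing pool settings; a sketch, where the pool name and
the numbers in the comment are only examples:

  # List pg_num (and size) for every pool
  ceph osd dump | grep pg_num

  # Or query a single pool directly
  ceph osd pool get rbd pg_num

  # Rough estimate: PGs per OSD ~= sum over pools of (pg_num * size) / OSD count
  # e.g. 4 pools x 512 PGs x 3 replicas / 9 OSDs ~= 683 PGs on each OSD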
On Tue, 16 Dec 2014 11:50:37 AM Robert LeBlanc wrote:
> COW into the snapshot (like VMware, Ceph, etc):
> When a write is committed, the changes are committed to a diff file and the
> base file is left untouched. This has only a single write penalty; if you
> want to discard the child, it is fast a
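To illustrate that copy-on-write model on the Ceph side, a rough rbd
snapshot/clone sequence (the pool and image names are invented, and the base
image is assumed to be a format 2 image so it can be cloned):

  # Snapshot the base image and protect the snapshot so it can be cloned
  rbd snap create rbd/base-image@gold
  rbd snap protect rbd/base-image@gold

  # Create a COW clone; new writes go to the clone, the parent stays untouched
  rbd clone rbd/base-image@gold rbd/child-image

  # Discarding the child is cheap: just remove the clone
  rbd rm rbd/child-image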
After apt-get update and upgrade I still see the 0.87 release... any hint?
/Zee
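One thing worth checking in a case like this is which repository and version
apt actually sees; for example (the paths below are the stock Debian/Ubuntu
ones):

  # Show the installed and candidate versions of ceph and where they come from
  apt-cache policy ceph

  # Confirm what is actually running
  ceph --version

  # If the candidate is still 0.87, the sources list is probably still
  # pointing at the old release's repository
  grep -r ceph /etc/apt/sources.list /etc/apt/sources.list.d/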
On Fri, Dec 26, 2014 at 9:58 AM, Florent MONTHEL
wrote:
> Hi Sage
>
> To be sure I understand correctly: if I have reached the max number of
> PGs per OSD with, for example, 4 pools, and I have to create 2 new pools
>
3 hosts:
1 CPU + 4 disks (3 TB SATA) per host
Ceph version: 0.80.6
OS: Red Hat 6.5
Cluster: 3 hosts, with 3 MONs + 9 OSDs (one OSD per disk)
1. When the cluster status is HEALTH_OK, I write a little data, and then I can
find some block files in the PG directory.
[root@rhls-test2 release]# ll data/osd/
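For reference, one way to see that mapping is to write a test object and then
look it up on disk; a sketch with an invented object name (the path shown is
the stock FileStore layout, while the cluster above uses its own data
directory):

  # Write a small test object
  rados -p rbd put test-obj /etc/hosts

  # Show which PG and OSDs the object maps to
  ceph osd map rbd test-obj

  # On the primary OSD's host, the object appears as a file under that PG's
  # _head directory
  find /var/lib/ceph/osd/ceph-*/current -name 'test-obj*'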
I see a lot of people mount their XFS OSDs with nobarrier for extra
performance, and it certainly makes a huge difference on my small system.
However, I don't do it, as my understanding is that this runs a risk of data
corruption in the event of a power failure - is this the case, even with Ceph?
side not
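For anyone wanting to double-check their own setup, the barrier behaviour in
effect can be read from the mount options (the mount point below is just the
default OSD path):

  # Show the mount options for one OSD data directory
  findmnt -no OPTIONS /var/lib/ceph/osd/ceph-0

  # Or list all mounted XFS filesystems with their options
  mount -t xfs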