See this presentation:
http://www.inktank.com/resource/end-of-raid-as-we-know-it-with-ceph-replication/
--
Emmanuel Florac | Direction technique
| Intellique
|
y been banned as rude/obnoxious/spam by
most users here and won't get any help.
2° this is certainly a virtual-environment problem completely unrelated
to Ceph. You should ask for help on a VirtualBox mailing list.
--
very very latest version of xfs_repair (3.2.2)?
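Checking which xfsprogs release is actually installed is quick (a sketch; the device name in the commented dry-run line is an example, not from the thread):

```shell
# Print the installed xfsprogs version; the thread asks whether 3.2.2
# (the then-latest xfs_repair) was tried. Falls back if not installed.
xfs_repair -V 2>/dev/null || echo "xfsprogs not installed"
# A read-only check on the unmounted device would then be, e.g.:
# xfs_repair -n /dev/sdb1
```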
--
> _xattr_use_omap = true
> public_network = 10.129.0.0/16
>
>
> this is the ceph conf file
Did you test the local filesystem performance of your servers?
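A quick way to sanity-check the underlying filesystem before blaming Ceph (a sketch; the target path and size are arbitrary example values, not from the thread):

```shell
# Sequential write throughput with fdatasync at the end, so the page
# cache doesn't inflate the number (path and size are example values).
dd if=/dev/zero of=/tmp/osd-bench.tmp bs=1M count=64 conv=fdatasync
rm -f /tmp/osd-bench.tmp
```

For random-I/O numbers closer to what OSDs see, fio with a direct, random-write job on the actual OSD mount point is the more representative test.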
--
but works pretty well overall.
--
ed to lower it a
> bit, since we are getting the max from our SAS disks, 100-110 IOPS per
> disk (3 TB OSDs). Any advice? Flashcache?
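A back-of-envelope ceiling helps frame that question (a sketch; the 100 IOPS/disk figure is from the thread, while the disk and replica counts are assumptions for illustration):

```shell
# Rough aggregate ceiling for client random writes on a replicated pool:
# each client write costs `replicas` disk writes. 100 IOPS/disk is the
# thread's figure; disk and replica counts are illustrative assumptions.
disks=24; iops_per_disk=100; replicas=2
echo "$(( disks * iops_per_disk / replicas )) client write IOPS ceiling"
```

With journals on the same spinners the real ceiling is lower still, which is why moving journals to SSD or adding a caching layer such as Flashcache is the usual answer.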
--
less
power (almost half), and that's the main selling point IMO, along with
durability.
--
Looks like a silly idea if you ask
me (because we already have several excellent filesystems; because
developing a reliable filesystem is DAMN HARD; because building a
feature-complete FS is CRAZY HARD; because FTL sucks anyway; etc).
--
the higher-end model for
cheap :)
--
better post this on the xfs
mailing list, though: linux-xfs (at) vger.kernel.org
--