r
image mapping.
I can post our ceph.conf and CRUSH map if needed.
--
Regards
Michał Chybowski
Tiktalik.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
mounts working on other pool members of
an existing pool.
Let me know if you have any questions.
Cheers,
Mike
"dos" and "don'ts" regarding OSD storage type
(bluestore / xfs / ext4 / btrfs), the correct
journal-to-storage-drive size ratio, and monitor placement in very
limited space (dedicated machines just for MONs are not an option).
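For context, the usual filestore rule of thumb from the Ceph docs is
journal size ≈ 2 × expected drive throughput × filestore max sync interval.
A minimal ceph.conf sketch, assuming ~120 MB/s HDDs and the default 5 s sync
interval (the numbers are placeholders, not a recommendation):

[osd]
# 2 * 120 MB/s * 5 s ≈ 1200 MB; rounded up with headroom (value is in MB)
osd journal size = 5120
filestore max sync interval = 5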
--
Regards
Michał Chybowski
Hello,
your subject line has little relevance to your rather broad questions.
On Tue, 14 Mar 2017 23:45:26 +0100 Michał Chybowski wrote:
Hi,
I'm going to set up a small cluster (5 nodes with 3 MONs, 2 - 4 HDDs per
node) to test whether Ceph at such a small scale is going to perform well
enough to put it into production.
On 15.03.2017 at 09:05, Eneko Lacunza wrote:
Hi Michal,
On 14/03/17 at 23:45, Michał Chybowski wrote:
I'm going to set up a small cluster (5 nodes with 3 MONs, 2 - 4 HDDs
per node) to test whether Ceph at such a small scale is going to perform
well enough to put it into production.
cephx
auth_client_required = cephx
filestore_xattr_use_omap = true
public network = 10.31.7.21/24
cluster network = 10.32.7.21/24
osd pool default size = 2
osd pool default min size = 1
Is there anything I could do to at least get 10 × 1 HDD performance on a
single RBD mapping?
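For reference, one way to see whether the limit is the RBD path or the raw
pool is to benchmark both (a sketch; the pool name "rbd", the mapped device
/dev/rbd0 and the fio parameters are assumptions):

# raw pool throughput, 60 s of 4 MB sequential writes, then sequential reads
rados bench -p rbd 60 write --no-cleanup
rados bench -p rbd 60 seq
rados -p rbd cleanup
# sequential writes through the mapped RBD device (destroys data on the image)
fio --name=rbdseq --filename=/dev/rbd0 --rw=write --bs=4M --direct=1 --ioengine=libaio --iodepth=16 --runtime=60 --time_based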
--
Regards
Or, if you have already created the partitions, you can do it with this command:
ceph-deploy osd prepare machine:/dev/sdb1:/dev/sdb2
where /dev/sdb1 is your data partition and /dev/sdb2 is your journal partition.
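For example, with two data/journal pairs on one host (the host and device
names are just placeholders), prepare and activate one OSD per data partition:

ceph-deploy osd prepare node1:/dev/sdb1:/dev/sdb2 node1:/dev/sdc1:/dev/sdc2
ceph-deploy osd activate node1:/dev/sdb1:/dev/sdb2 node1:/dev/sdc1:/dev/sdc2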
Regards
Michał Chybowski
Tiktalik.com
On 17.12.2015 at 12:46, Loic Dachary wrote:
Hi,
You
In my case one server was also installed non-GPT, and in
/usr/sbin/ceph-disk I added the line
os.chmod(os.path.join(path, 'journal'), 0777) after line 1926.
I know that it's very ugly and shouldn't be done in production, but I
had no time to search for the proper way to fix it.
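A less invasive workaround is to fix the permissions of the journal itself
instead of patching ceph-disk (a sketch, assuming a release where the OSD
runs as the ceph user; the device name and rules-file path are placeholders):

# one-off, on the OSD host
chown ceph:ceph /dev/sdb2
# persistent variant, e.g. in /etc/udev/rules.d/90-ceph-journal.rules
KERNEL=="sdb2", OWNER="ceph", GROUP="ceph", MODE="0660"

The udev rule survives reboots, which a one-off chown on a device node does not.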
"Remote journal"? No, don't do it even if it'd be possible via NFS or
any kind of network-FS.
You could always keep the journal on the HDD (yes, I know it's not what you
wanted to achieve, but I don't think putting the journal on a remote
machine would be a good idea in any way).
Regards
Michał
W
Unfortunately, VSM can manage only pools / clusters created by itself.
Regards
Michał Chybowski
Tiktalik.com
On 02.03.2016 at 20:23, Василий Ангапов wrote:
You may also look at Intel Virtual Storage Manager:
https://github.com/01org/virtual-storage-manager
2016-03-02 13:57 GMT+03:00