[ceph-users] Ceph's RBD flattening and image options

2015-06-30 Thread Michał Chybowski
r image mapping. I can post you our ceph.conf and CRUSH map if needed. -- Regards Michał Chybowski Tiktalik.com

Re: [ceph-users] Ceph on XenServer - Using RBDSR

2017-02-25 Thread Michał Chybowski
mounts working on other pool members of an existing pool. Let me know if you have any questions. Cheers, Mike

[ceph-users] Ceph Bluestore

2017-03-14 Thread Michał Chybowski
uot; and "don'ts" in matter of OSD storage type (bluestore / xfs / ext4 / btrfs), correct "journal-to-storage-drive-size" ratio and monitor placement in very limited space (dedicated machines just for MONs are not an option). -- Regards Michał Chybowski _

Re: [ceph-users] Ceph Bluestore

2017-03-15 Thread Michał Chybowski
Hello, your subject line has little relevance to your rather broad questions. On Tue, 14 Mar 2017 23:45:26 +0100, Michał Chybowski wrote: Hi, I'm going to set up a small cluster (5 nodes with 3 MONs, 2 - 4 HDDs per node) to test if Ceph at such a small scale is going to perform well enough

Re: [ceph-users] Ceph Bluestore

2017-03-15 Thread Michał Chybowski
On 15.03.2017 at 09:05, Eneko Lacunza wrote: Hi Michal, On 14/03/17 at 23:45, Michał Chybowski wrote: I'm going to set up a small cluster (5 nodes with 3 MONs, 2 - 4 HDDs per node) to test if Ceph at such a small scale is going to perform well enough to put it into production

[ceph-users] Ceph RBD performance

2015-12-14 Thread Michał Chybowski
cephx auth_client_required = cephx filestore_xattr_use_omap = true public network = 10.31.7.21/24 cluster network = 10.32.7.21/24 osd pool default size = 2 osd pool default min size = 1 Is there anything I could do to at least get 10*1 HDD performance on a single RBD mapping? -- Regards
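The same options laid out as a ceph.conf fragment for readability; the two auth lines cut off at the start of the excerpt are assumed to be the usual cephx settings, not quoted from the message:

    [global]
    auth_cluster_required = cephx     ; assumed, truncated in the excerpt
    auth_service_required = cephx     ; assumed, truncated in the excerpt
    auth_client_required = cephx
    filestore_xattr_use_omap = true
    public network = 10.31.7.21/24
    cluster network = 10.32.7.21/24
    osd pool default size = 2
    osd pool default min size = 1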

Re: [ceph-users] data partition and journal on same disk

2015-12-17 Thread Michał Chybowski
Or, if you have already created the partitions, you can do it with this command: ceph-deploy osd prepare machine:/dev/sdb1:/dev/sdb2, where /dev/sdb1 is your data partition and /dev/sdb2 is your journal one. Regards Michał Chybowski Tiktalik.com On 17.12.2015 at 12:46, Loic Dachary wrote: Hi, You
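The full sequence would look roughly like this; only the ceph-deploy prepare line comes from the message, the sgdisk call and the partition sizes are illustrative:

    # carve out a data and a journal partition first (sizes are just an example)
    sgdisk --new=1:0:+900G --new=2:0:+10G /dev/sdb
    # hand both partitions to ceph-deploy: data first, journal second
    ceph-deploy osd prepare machine:/dev/sdb1:/dev/sdb2
    ceph-deploy osd activate machine:/dev/sdb1:/dev/sdb2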

Re: [ceph-users] Infernalis upgrade breaks when journal on separate partition

2016-01-15 Thread Michał Chybowski
In my case one server was also installed without GPT, and in /usr/sbin/ceph-disk I added the line os.chmod(os.path.join(path, 'journal'), 0777) after line 1926. I know it's very ugly and shouldn't be done in production, but I had no time to search for the proper way to fix it
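The underlying issue was that Infernalis daemons run as the ceph user instead of root, so a root-owned journal partition is no longer writable; a sketch of the usual manual workaround, assuming a hypothetical journal partition /dev/sdb2 (not taken from this thread):

    # let the ceph user write to the raw journal partition
    chown ceph:ceph /dev/sdb2
    # note: /dev ownership is reset on reboot, so a udev rule (or the GPT
    # partition type GUID that Ceph's udev rules match) is needed to persist it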

Re: [ceph-users] Separate hosts for osd and its journal

2016-02-10 Thread Michał Chybowski
"Remote journal"? No, don't do it even if it'd be possible via NFS or any kind of network-FS. You could always keep the journal on HDD (yes, I know it's not what You wanted to achieve, but I don't think that setting journal on remote machine would be a good idea in any way) Regards Michał W

Re: [ceph-users] User Interface

2016-03-02 Thread Michał Chybowski
Unfortunately, VSM can only manage pools / clusters created by itself. Regards Michał Chybowski Tiktalik.com On 02.03.2016 at 20:23, Василий Ангапов wrote: You may also look at Intel Virtual Storage Manager: https://github.com/01org/virtual-storage-manager 2016-03-02 13:57 GMT+03:00