Re: [ceph-users] tgt+librbd error 4

2016-12-18 Thread Bruno Silva
But FreeNAS is based on FreeBSD.

On Sun, 18 Dec 2016 at 00:40, ZHONG wrote:
> Thank you for your reply.
>
> On 17 Dec 2016, at 22:21, Jake Young wrote:
>
> FreeNAS running in KVM Linux hypervisor

Re: [ceph-users] tgt+librbd error 4

2016-12-18 Thread Jake Young
It's running as a guest in a Linux hypervisor.

I'm mapping RBD disks attached to a virtual SCSI adapter (so they can be added and removed). I've configured FreeNAS to just share each disk as an iSCSI LUN, rather than configuring a ZFS pool with the disks.

Jake

On Sun, Dec 18, 2016 at 8:37 AM Bruno Silva wrote: …
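
A minimal sketch of that kind of setup, assuming plain QEMU built with RBD support (pool, image, and file names below are hypothetical placeholders; most people would drive the same thing through libvirt):

    # Boot the FreeNAS guest with a virtio-scsi adapter and one RBD-backed LUN.
    # "rbd/freenas-lun0" is a hypothetical pool/image; disks attached this way
    # can also be hot-added and hot-removed via the QEMU monitor or libvirt.
    qemu-system-x86_64 -enable-kvm -m 8192 \
      -drive file=freenas-boot.qcow2,if=virtio \
      -device virtio-scsi-pci,id=scsi0 \
      -drive file=rbd:rbd/freenas-lun0,format=raw,if=none,id=lun0 \
      -device scsi-hd,drive=lun0,bus=scsi0.0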

Re: [ceph-users] tgt+librbd error 4

2016-12-18 Thread Oliver Humpage
> On 18 Dec 2016, at 13:54, Jake Young wrote:
>
> It's running as a guest in a Linux hypervisor.
>
> I'm mapping rbd disks attached to a virtual scsi adaptor (so they can be
> added and removed).

Nice :) We've had success with LIO; it works well with multipathing across two gateways and i…
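
For comparison, a rough sketch of the LIO side on one gateway, with hypothetical IQN, device, and portal IP (the equivalent commands would be run on the second gateway for multipathing, keeping the exported LUN identity consistent across both):

    # Map the RBD image on the gateway (appears as e.g. /dev/rbd0)
    rbd map rbd/freenas-lun0

    # Export it through LIO with targetcli
    targetcli /backstores/block create name=lun0 dev=/dev/rbd0
    targetcli /iscsi create iqn.2016-12.com.example:gw1
    targetcli /iscsi/iqn.2016-12.com.example:gw1/tpg1/luns create /backstores/block/lun0
    targetcli /iscsi/iqn.2016-12.com.example:gw1/tpg1/portals create 192.0.2.11 3260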

[ceph-users] Calamari problem

2016-12-18 Thread Vaysman, Marat
I successfully installed the following without problems on a system running CentOS 7.2:

calamari-clients-1.2.2-32_g931ee58.el7.centos.x86_64.rpm
calamari-server-1.3.0.1-49_g828960a.el7.centos.x86_64.rpm

The Calamari server processes started successfully:

carbon-cache   RUNNING   pid 9080, uptim…
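
(For what it's worth, those status lines come from supervisord; on a stock Calamari install the full set of server processes can be listed with:

    supervisorctl status

which should show carbon-cache and the other Calamari services as RUNNING.)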

[ceph-users] fio librbd result is poor

2016-12-18 Thread 马忠明
Hi guys,

I was recently testing our Ceph cluster, which is mainly used for block storage (RBD). We have 30 SSD drives in total (5 storage nodes, 6 SSD drives per node). However, the fio results are very poor.

We tested the workload on the SSD pool with the following parameters:

"fio --size=50G \
 --ioe…
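
(For readers without the full message: a complete fio run against the rbd ioengine looks something like the sketch below. Pool, image, and client names are placeholders, and the image must already exist, e.g. created with "rbd create". iodepth and numjobs in particular have a large effect on the numbers a single client can show.)

    fio --name=rbd-test \
        --ioengine=rbd --clientname=admin --pool=ssd-pool --rbdname=test-img \
        --rw=randwrite --bs=4k --iodepth=32 --numjobs=1 \
        --size=50G --runtime=300 --time_based --group_reporting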

Re: [ceph-users] cephfs quota

2016-12-18 Thread gjprabu
Hi Goncalo,

Thanks, quotas are working now using the --client-quota option. Once again, thank you all for helping with this issue.

Regards,
Prabu GJ

On Sat, 17 Dec 2016 06:30:04 +0530, Goncalo Borges wrote:
> Hi all,
> Even when using ceph-fuse, quo…
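
(For anyone finding this thread later, the working combination was: mount with enforcement enabled, then set the quota as extended attributes on a directory. Mount point, directory, and limits below are example values.)

    # Mount with quota enforcement enabled (ceph-fuse of this era needs the flag)
    ceph-fuse --client-quota /mnt/cephfs

    # Limit a directory to 100 GB and 10,000 files
    setfattr -n ceph.quota.max_bytes -v 107374182400 /mnt/cephfs/shared
    setfattr -n ceph.quota.max_files -v 10000 /mnt/cephfs/shared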

Re: [ceph-users] fio librbd result is poor

2016-12-18 Thread Christian Balzer
Hello,

On Mon, 19 Dec 2016 13:29:07 +0800 (CST) 马忠明 wrote:

> Hi guys,
>
> So recently I was testing our ceph cluster which mainly used for block
> usage (rbd).
>
> We have 30 ssd drives total (5 storage nodes, 6 ssd drives each node). However
> the result of fio is very poor.
>
All relevant deta…
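
(The usual starting points for "all relevant details" on this list are the exact Ceph version, SSD models, network setup, and the basic cluster state, e.g.:)

    ceph -v
    ceph -s
    ceph osd tree
    ceph osd pool ls detail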

Re: [ceph-users] fio librbd result is poor

2016-12-18 Thread mazhongming
Hi Christian,

Thanks for your reply.

At 2016-12-19 14:01:57, "Christian Balzer" wrote:
>
> Hello,
>
> On Mon, 19 Dec 2016 13:29:07 +0800 (CST) 马忠明 wrote:
>
>> Hi guys,
>>
>> So recently I was testing our ceph cluster which mainly used for block
>> usage (rbd).
>>
>> We have 30 ssd drives total (5…

Re: [ceph-users] fio librbd result is poor

2016-12-18 Thread Christian Balzer
Hello,

On Mon, 19 Dec 2016 15:05:05 +0800 (CST) mazhongming wrote:

> Hi Christian,
> Thanks for your reply.
>
> At 2016-12-19 14:01:57, "Christian Balzer" wrote:
>>
>> Hello,
>>
>> On Mon, 19 Dec 2016 13:29:07 +0800 (CST) 马忠明 wrote:
>>
>>> Hi guys,
>>>
>>> So recently I was testing o…