[ceph-users] advice needed for different projects design

2018-10-08 Thread Joshua Chen
Hello all, When planning for my institute's needs, I would like to seek design suggestions from you for my particular situation: 1. I will support many projects; currently they are all NFS servers (and those NFS servers serve their clients respectively). For example, nfsA (for clients belonging to p…
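
A common way to keep serving those NFS clients while moving the backing store to Ceph is to export CephFS through nfs-ganesha. A minimal sketch of one export, assuming nfs-ganesha with the Ceph FSAL is installed on the gateway host; the export id and paths (e.g. /projects/nfsA) are hypothetical:

    # /etc/ganesha/ganesha.conf (excerpt)
    EXPORT {
        Export_Id = 1;
        Path = "/projects/nfsA";     # hypothetical CephFS subtree for project A's clients
        Pseudo = "/nfsA";            # NFSv4 pseudo path the clients mount
        Access_Type = RW;
        Squash = No_Root_Squash;
        FSAL {
            Name = CEPH;             # serve this export straight from CephFS
        }
    }

Each project (nfsA, nfsB, ...) would get its own EXPORT block, so the existing per-project NFS layout carries over unchanged.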

Re: [ceph-users] list admin issues

2018-10-06 Thread Joshua Chen
I also got removed once, and got another warning once (needed to re-enable). Cheers Joshua

On Sun, Oct 7, 2018 at 5:38 AM Svante Karlsson wrote:
> I'm also getting removed, but not only from ceph. I subscribe to the
> d...@kafka.apache.org list and the same thing happens there.
>
> On Sat, 6 Oct 2018 at 23…

[ceph-users] provide cephfs to multiple projects

2018-10-03 Thread Joshua Chen
Hello all, I am almost ready to provide storage (CephFS in the beginning) to my colleagues. They belong to different main projects and, according to the budgets they previously claimed, will have different capacities. For example, ProjectA will have 50TB and ProjectB 150TB. I chose CephFS…
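
For per-project capacity caps like these, CephFS directory quotas are the natural fit. A minimal sketch, assuming each project gets its own directory under a mounted CephFS tree (mount point and paths hypothetical):

    # cap ProjectA at 50 TB and ProjectB at 150 TB
    setfattr -n ceph.quota.max_bytes -v 50000000000000  /mnt/cephfs/projectA
    setfattr -n ceph.quota.max_bytes -v 150000000000000 /mnt/cephfs/projectB

    # read a quota back to verify
    getfattr -n ceph.quota.max_bytes /mnt/cephfs/projectA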

Re: [ceph-users] mount cephfs from a public network ip of mds

2018-10-01 Thread Joshua Chen
> …IP address and they'll only listen on that IP.
>
> As David suggested: check if you really need separate networks. This
> setup usually creates more problems than it solves, especially if you
> have one 1G and one 10G network.
>
> Paul
>
> On Mon., 1 Oct. 2018…
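
For reference, pinning a mon to one address is a per-daemon ceph.conf setting; a tiny sketch, with a hypothetical host name and address:

    [mon.cephmon1]
    public addr = 140.109.0.11:6789   # this mon listens only on this address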

Re: [ceph-users] mount cephfs from a public network ip of mds

2018-09-30 Thread Joshua Chen
>> …further information on the cluster, such as the IPs of MDS and OSDs.
>>
>> This means you need to provide the mon IPs to the mount command, not
>> the MDS IPs. Your first command works by coincidence, since
>> you seem to run the mons and MDSes on the same…
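
A minimal sketch of what that mount looks like, with hypothetical mon addresses and secret file path:

    # point the kernel client at the mons; they hand back the MDS/OSD addresses
    mount -t ceph 140.109.0.11:6789,140.109.0.12:6789,140.109.0.13:6789:/ /mnt/cephfs \
        -o name=admin,secretfile=/etc/ceph/admin.secret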

[ceph-users] mount cephfs from a public network ip of mds

2018-09-29 Thread Joshua Chen
Hello all, I am testing a CephFS cluster so that clients can mount -t ceph. The cluster has 6 nodes: 3 mons (also MDS) and 3 OSDs. All 6 nodes have 2 NICs: one 1Gb NIC with a real IP (140.109.0.0) and one 10Gb NIC with a virtual IP (10.32.0.0).

    140.109.x.x  Nic1 (1G) <- MDS1 -> Nic2 (10G)  10.32.x.x
    140.…
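
For a two-NIC layout like this, the relevant ceph.conf settings are public_network and cluster_network; a sketch using the subnets above (mask widths assumed):

    [global]
    public network  = 140.109.0.0/16   # 1G client-facing network: mons, MDS, mounts
    cluster network = 10.32.0.0/16     # 10G network used only for OSD replication/recovery

Clients only ever need to reach the public network; the cluster network carries OSD-to-OSD traffic.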

[ceph-users] changing my cluster network ip

2018-09-26 Thread Joshua Chen
Hello all, I am building my testing cluster with a public_network and a cluster_network interface. For some reason, the testing cluster needs to peer with my colleague's machines, so it's better that I change my original cluster_network from 172.20.x.x to 10.32.67.x. Now, if I don't want to re…
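
Since cluster_network only carries OSD-to-OSD traffic, a renumbering like this is in principle an edit to ceph.conf on every node followed by an OSD restart; a sketch, assuming the new subnet is already reachable between all OSD hosts:

    # /etc/ceph/ceph.conf (excerpt, on every node)
    [global]
    cluster network = 10.32.67.0/24    # was 172.20.x.x

    # then restart the OSDs, one host at a time:
    systemctl restart ceph-osd.target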

[ceph-users] customized ceph cluster name by ceph-deploy

2018-09-21 Thread Joshua Chen
Hi all, I am using ceph-deploy 2.0.1 to create my testing cluster with this command: ceph-deploy --cluster pescadores new --cluster-network 100.109.240.0/24 --public-network 10.109.240.0/24 cephmon1 cephmon2 cephmon3, but the --cluster pescadores (name of the cluster) doesn't seem to work. Anyone…
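
Custom cluster names were being deprecated around this time, so newer ceph-deploy releases may simply ignore the flag; for the client-side tools, a non-default name can still be selected per command or via the environment. A sketch, reusing the pescadores name from above:

    # read /etc/ceph/pescadores.conf instead of /etc/ceph/ceph.conf
    ceph --cluster pescadores status

    # or set it once for the shell
    export CEPH_ARGS="--cluster pescadores"
    ceph status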

Re: [ceph-users] Ceph iSCSI is a prank?

2018-03-01 Thread Joshua Chen
Dear all, I wonder how we could support VM systems with Ceph storage (block devices). My colleagues are waiting for my answer for VMware (vSphere 5), and I myself use oVirt (RHEV); the default protocol is iSCSI. I know that OpenStack/Cinder works well with Ceph, and Proxmox too (so I've heard). But cu…

Re: [ceph-users] iSCSI over RBD

2018-01-06 Thread Joshua Chen
That is awesome and wonderful! Thanks for making this ACL option available. Cheers Joshua

On Sat, Jan 6, 2018 at 7:17 AM, Mike Christie wrote:
> On 01/04/2018 09:36 PM, Joshua Chen wrote:
> > Hello Michael,
> > Thanks for the reply.
> > I did check this…
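
For anyone searching the archives, creating a client entry (ACL) and its CHAP credentials in gwcli looks roughly like the following sketch; the IQNs and credentials are hypothetical:

    # inside gwcli, on the gateway node
    /> cd /iscsi-target/iqn.2003-01.com.redhat.iscsi-gw:ceph-igw/hosts
    /hosts> create iqn.1994-05.com.redhat:rh7-client
    /hosts/iqn...client> auth chap=myiscsiusername/myiscsipassword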

Re: [ceph-users] iSCSI over RBD

2018-01-04 Thread Joshua Chen
…, Steven Vacaroaia wrote:
> Hi Joshua,
>
> How did you manage to use the iSCSI gateway?
> I would like to do that, but am still waiting for a patched kernel.
>
> What kernel/OS did you use, and/or how did you patch it?
>
> Thanks
> Steven
>
> On 4 January 2018 at 04:50, Jo…

Re: [ceph-users] iSCSI over RBD

2018-01-04 Thread Joshua Chen
…s bug and the current status of the chap/acl limitation. Looking forward to this ACL function being added to gwcli. Cheers Joshua

On Fri, Jan 5, 2018 at 12:47 AM, Michael Christie wrote:
> On 01/04/2018 03:50 AM, Joshua Chen wrote:
> > Dear all,
> > Although I managed to run gwcli and c…

Re: [ceph-users] iSCSI over RBD

2018-01-04 Thread Joshua Chen
…10:55 AM, Joshua Chen wrote:
> I had the same problem before; mine is CentOS, and when I created
> /iscsi/ create iqn_bla-bla
> it said
> "Local LIO instance already has LIO configured with a target - unable to continue"
>
> then finally the solution happen…
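
The usual way out of that error is to wipe the stale LIO configuration that targetcli left behind, then let the gateway daemon take over; a sketch (destructive: it erases the node's existing LIO setup):

    # WARNING: clears all local LIO/targetcli configuration on this node
    targetcli clearconfig confirm=true
    systemctl restart rbd-target-api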

Re: [ceph-users] iSCSI over RBD

2018-01-03 Thread Joshua Chen
…somehow they are doing the same thing; you need to disable the 'target' service (targetcli) in order to allow gwcli (rbd-target-api) to do its job. Cheers Joshua

On Thu, Jan 4, 2018 at 2:39 AM, Mike Christie wrote:
> On 12/25/2017 03:13 PM, Joshua Chen wrote:
> > Hello fol…
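
In shell terms, that hand-over is roughly the following (service names as shipped by ceph-iscsi on CentOS):

    # stop targetcli's unit from restoring its own LIO config at boot
    systemctl stop target
    systemctl disable target

    # let the ceph-iscsi daemons manage LIO instead
    systemctl enable rbd-target-gw rbd-target-api
    systemctl start rbd-target-gw rbd-target-api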

[ceph-users] iSCSI over RBD

2017-12-25 Thread Joshua Chen
Hello folks, I am trying to share my Ceph RBD images through the iSCSI protocol. I am trying the iscsi-gateway: http://docs.ceph.com/docs/master/rbd/iscsi-overview/ Now systemctl start rbd-target-api is working, and I can run gwcli (on a CentOS 7.4 OSD node):

    gwcli
    /> ls
    o- /…
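
From that point, the rest of the gwcli flow is roughly the following sketch; the IQN, gateway names/IPs, and pool/image are hypothetical:

    /> cd /iscsi-target
    /iscsi-target> create iqn.2003-01.com.redhat.iscsi-gw:ceph-igw
    /iscsi-target> cd iqn.2003-01.com.redhat.iscsi-gw:ceph-igw/gateways
    /gateways> create ceph-gw-1 10.32.0.11
    /gateways> create ceph-gw-2 10.32.0.12
    /> cd /disks
    /disks> create pool=rbd image=disk_1 size=50G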