1. The gluster server has the volume option nfs.disable set to off:

Volume Name: gv0
Type: Disperse
Volume ID: 429100e4-f56d-4e28-96d0-ee837386aa84
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: gfs1:/brick1/gv0
Brick2: gfs2:/brick1/gv0
Brick3: gfs3:/brick1/gv0
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: off
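(For reference, the option was set the standard way; a quick sketch, with
"volume get" as just one way to re-check the value:)

# enable gluster's built-in NFS (gNFS) server for the volume
gluster volume set gv0 nfs.disable off
# confirm the value actually took effect
gluster volume get gv0 nfs.disable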
2. The self-heal daemon (glustershd) process has started:

[root@gfs1 ~]# ps -ef | grep glustershd
root       1117      1  0 10:12 ?        00:00:00 /usr/sbin/glusterfs -s localhost --volfile-id shd/gv0 -p /var/run/gluster/shd/gv0/gv0-shd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/ca97b99a29c04606.socket --xlator-option *replicate*.node-uuid=323075ea-2b38-427c-a9aa-70ce18e94208 --process-name glustershd --client-pid=-6

3. But the status of gv0 is not correct: the NFS Server is not online.

[root@gfs1 ~]# gluster volume status gv0
Status of volume: gv0
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gfs1:/brick1/gv0                      49154     0          Y       4180
Brick gfs2:/brick1/gv0                      49154     0          Y       1222
Brick gfs3:/brick1/gv0                      49154     0          Y       1216
Self-heal Daemon on localhost               N/A       N/A        Y       1117
NFS Server on localhost                     N/A       N/A        N       N/A
Self-heal Daemon on gfs2                    N/A       N/A        Y       1138
NFS Server on gfs2                          N/A       N/A        N       N/A
Self-heal Daemon on gfs3                    N/A       N/A        Y       1131
NFS Server on gfs3                          N/A       N/A        N       N/A

Task Status of Volume gv0
------------------------------------------------------------------------------
There are no active volume tasks

4. So I can't mount gv0 on my client:

[root@kvms1 ~]# mount -t nfs gfs1:/gv0 /mnt/test
mount.nfs: Connection refused

Please help! Thanks!
[email protected]
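P.S. In case it helps, these are the checks I plan to run next on the servers
(a sketch, assuming systemd and the default gNFS log location):

# is a gluster NFS daemon process running at all?
ps -ef | grep -i "glusterfs.*nfs"
# gNFS registers with rpcbind, so rpcbind must be running
systemctl status rpcbind
# default log location of the built-in NFS server
tail -n 50 /var/log/glusterfs/nfs.log
# which RPC services are registered on the server, seen from the client?
rpcinfo -p gfs1
# are any exports visible?
showmount -e gfs1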
________

Community Meeting Calendar:
Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
[email protected]
https://lists.gluster.org/mailman/listinfo/gluster-users
