On Fri, Aug 21, 2020 at 4:07 AM Gilberto Nunes <[email protected]>
wrote:
> Hi Sachidananda!
> I am trying to use the latest release of gstatus, but when I cut off one
> of the nodes, I get a timeout...
>
I tried to reproduce this but couldn't. How did you cut off the node? I
killed all the Gluster processes on one of the nodes and see the output
below: one of the bricks is shown as Offline, and Nodes is 2/3. Could you
please share the exact steps to reproduce the issue?
root@master-node:/mnt/gluster/movies# gstatus -a

Cluster:
         Status: Degraded        GlusterFS: 9dev
         Nodes:  2/3             Volumes: 1/1

Volumes:
   snap-1   Replicate   Started (PARTIAL) - 1/2 Bricks Up
                        Capacity: (12.02% used) 5.00 GiB/40.00 GiB (used/total)
                        Self-Heal:
                           slave-1:/mnt/brick1/snapr1/r11 (7 File(s) to heal)
                        Snapshots: 2
                           Name: snap_1_today_GMT-2020.08.15-15.39.10
                           Status: Started
                           Created On: 2020-08-15 15:39:10 +0000
                           Name: snap_2_today_GMT-2020.08.15-15.39.20
                           Status: Stopped
                           Created On: 2020-08-15 15:39:20 +0000
                        Bricks:
                           Distribute Group 1:
                              slave-1:/mnt/brick1/snapr1/r11 (Online)
                              slave-2:/mnt/brick1/snapr2/r22 (Offline)
                        Quota: Off

Note: glusterd/glusterfsd is down in one or more nodes. Sizes might not
be accurate.

root@master-node:/mnt/gluster/movies#
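For reference, this is roughly how I took the node down before running
gstatus (a minimal sketch, assuming the node runs a systemd-managed
glusterd and that the brick daemons are the usual glusterfsd processes;
run as root on the node you want to cut off):

```shell
# Stop the Gluster management daemon (no-op if it is not running).
systemctl stop glusterd 2>/dev/null || true
# Kill any remaining brick processes.
pkill glusterfsd 2>/dev/null || true
status="node taken down (or services were not running)"
echo "$status"
```

After that, running `gstatus -a` from another node should report the
node and its bricks as down, as in the output above, rather than timing
out.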
>
________
Community Meeting Calendar:
Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968
Gluster-users mailing list
[email protected]
https://lists.gluster.org/mailman/listinfo/gluster-users