I have some stability in my Lustre installation after many days of testing;
however, df -h now reports the /home filesystem inconsistently.

After mounting /home I get:
[root@n04 ~]# df -h
10.140.90.42@tcp:/lustre  286T   59T  228T  21% /lustre
10.140.90.42@tcp:/home    191T  153T   38T  81% /home

Running it again straight afterwards, I get:

[root@n04 ~]# df -h
10.140.90.42@tcp:/lustre  286T   59T  228T  21% /lustre
10.140.90.42@tcp:/home     48T   40T  7.8T  84% /home

All four OSTs report as active and present:

[root@n04 ~]# lfs df
....
UUID                   1K-blocks        Used   Available Use% Mounted on
home-MDT0000_UUID     4473805696    41784064  4432019584   1% /home[MDT:0]
home-OST0000_UUID    51097753600 40560842752 10536908800  80% /home[OST:0]
home-OST0001_UUID    51097896960 42786978816  8310916096  84% /home[OST:1]
home-OST0002_UUID    51097687040 38293322752 12804362240  75% /home[OST:2]
home-OST0003_UUID    51097765888 42293640192  8804123648  83% /home[OST:3]

filesystem_summary:  204391103488 163934784512 40456310784  81% /home

[root@n04 ~]#
[root@n04 ~]# lfs osts
OBDS:
0: lustre-OST0000_UUID ACTIVE
1: lustre-OST0001_UUID ACTIVE
2: lustre-OST0002_UUID ACTIVE
3: lustre-OST0003_UUID ACTIVE
4: lustre-OST0004_UUID ACTIVE
5: lustre-OST0005_UUID ACTIVE
OBDS:
0: home-OST0000_UUID ACTIVE
1: home-OST0001_UUID ACTIVE
2: home-OST0002_UUID ACTIVE
3: home-OST0003_UUID ACTIVE
[root@n04 ~]#

Has anyone seen this before? Reboots and remounts do not appear to change the
values, the ZFS pool reports as ONLINE, and a scrub returns 0 errors.
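Purely as an illustrative sketch (the /home mount point is taken from the output above, and the sample count/interval are arbitrary), here is one way to capture the flip-flopping df values for comparison against lfs df taken at the same time:

```shell
# Hypothetical helper: sample the client's statfs view of a mount
# point a few times, printing one df line per sample, so the two
# differing /home readings can be caught and timestamped.
sample_df() {
  mnt="${1:-/home}"          # mount point to sample (assumption)
  for i in 1 2 3; do         # three samples, one second apart
    df -h "$mnt" | tail -n 1
    sleep 1
  done
}
```

Running sample_df alongside lfs df in another terminal would show whether the 48T reading ever coincides with a change in the per-OST numbers, or whether only the statfs result seen by df is affected.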

Sid Young
_______________________________________________
lustre-discuss mailing list
[email protected]
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org
