Makia,

The drive size is 7.6 TB, which translates to about 6.9 TiB (the unit zpool uses for "T"). So the zpool size is just 10 x 6.9T = 69T, since zpool shows the total amount of raw disk space available to the pool. The usable space (which is what df is reporting) should be more like 0.8 x 69T = 55T, since a 10-wide raidz2 spends two drives' worth of space on parity. I am not sure about the remaining discrepancy of roughly 3T. Maybe that is due to some ZFS and/or Lustre overhead?
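For reference, a minimal sketch of that arithmetic (it assumes 7.6 TB decimal drives and the 8/10 data fraction of a 10-wide raidz2; ZFS slop space, metadata, and any Lustre overhead are not modeled, which is likely where the remaining few TiB go):

TB = 10**12          # vendors quote drive capacity in decimal terabytes
TiB = 2**40          # zpool and df report in binary tebibytes ("T")

drive_tb = 7.6
drives_per_pool = 10
data_drives = drives_per_pool - 2   # raidz2 keeps two drives' worth of parity

drive_tib = drive_tb * TB / TiB                 # ~6.9 TiB per drive
raw_pool_tib = drives_per_pool * drive_tib      # ~69 TiB, roughly the 69.9T zpool list shows
usable_tib = raw_pool_tib * data_drives / drives_per_pool   # ~55 TiB before overheads

print(f"per drive:     {drive_tib:.1f} TiB")
print(f"raw pool:      {raw_pool_tib:.1f} TiB")
print(f"usable (est.): {usable_tib:.1f} TiB (df reports ~52T)")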
--Rick

On 4/6/21, 3:49 PM, "lustre-discuss on behalf of Makia Minich" <[email protected] on behalf of [email protected]> wrote:

    I believe this was discussed a while ago, but I was unable to find clear answers, so I'll re-ask in hopefully a slightly different way.

    On an OST, I have 30 drives, each at 7.6TB. I create 3 raidz2 zpools of 10 devices (ashift=12):

    [root@lustre47b ~]# zpool list
    NAME     SIZE   ALLOC  FREE   CKPOINT  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
    oss55-0  69.9T  37.3M  69.9T  -        -         0%    0%   1.00x  ONLINE  -
    oss55-1  69.9T  37.3M  69.9T  -        -         0%    0%   1.00x  ONLINE  -
    oss55-2  69.9T  37.4M  69.9T  -        -         0%    0%   1.00x  ONLINE  -
    [root@lustre47b ~]#

    After running mkfs.lustre against these (and mounting lustre), I see:

    [root@lustre47b ~]# df -h | grep ost
    oss55-0/ost165   52T   27M   52T   1% /lustre/ost165
    oss55-1/ost166   52T   27M   52T   1% /lustre/ost166
    oss55-2/ost167   52T   27M   52T   1% /lustre/ost167
    [root@lustre47b ~]#

    Basically, we're seeing a pretty dramatic loss in capacity (156TB vs 209.7TB, so a loss of about 50TB). Is there any insight into where this capacity is disappearing to? Is there some mkfs.lustre or zpool option I missed in creating this? Or is something just reporting slightly off and that space really is there?

    Thanks.

    Makia Minich
    Chief Architect
    System Fabric Works
    "Fabric Computing that Works"

    "Oh, I don't know. I think everything is just as it should be, y'know?" - Frank Fairfield

    _______________________________________________
    lustre-discuss mailing list
    [email protected]
    http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org
