On 11/28/24 09:10, Juraj Lutter wrote:
Are there any differences in each pool’s properties? (zpool get all …)
Well, they are all different. There is a pool called leaf, which is a
mirror of two disks on two SATA/SAS backplanes. There is proteus, which
*was* working great over iSCSI, and then the local pool t0 on the little
local NVMe Samsung device, which thankfully exists or the machine would
likely not boot at all.
titan# zpool list leaf
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
leaf 18.2T 900K 18.2T - - 0% 0% 1.00x ONLINE -
titan#
titan# zpool status leaf
  pool: leaf
 state: ONLINE
config:

	NAME        STATE     READ WRITE CKSUM
	leaf        ONLINE       0     0     0
	  mirror-0  ONLINE       0     0     0
	    ada0    ONLINE       0     0     0
	    ada1    ONLINE       0     0     0

errors: No known data errors
titan#
titan# zpool get all leaf
NAME PROPERTY VALUE SOURCE
leaf size 18.2T -
leaf capacity 0% -
leaf altroot - default
leaf health ONLINE -
leaf guid 227678941907208615 -
leaf version - default
leaf bootfs - default
leaf delegation on default
leaf autoreplace off default
leaf cachefile - default
leaf failmode continue local
leaf listsnapshots off default
leaf autoexpand off default
leaf dedupratio 1.00x -
leaf free 18.2T -
leaf allocated 900K -
leaf readonly off -
leaf ashift 0 default
leaf comment - default
leaf expandsize - -
leaf freeing 0 -
leaf fragmentation 0% -
leaf leaked 0 -
leaf multihost off default
leaf checkpoint - -
leaf load_guid 6926439177379939855 -
leaf autotrim off default
leaf compatibility openzfs-2.0-freebsd local
leaf bcloneused 0 -
leaf bclonesaved 0 -
leaf bcloneratio 1.00x -
leaf dedup_table_size 0 -
leaf dedup_table_quota auto default
leaf feature@async_destroy enabled local
leaf feature@empty_bpobj enabled local
leaf feature@lz4_compress active local
leaf feature@multi_vdev_crash_dump enabled local
leaf feature@spacemap_histogram active local
leaf feature@enabled_txg active local
leaf feature@hole_birth active local
leaf feature@extensible_dataset active local
leaf feature@embedded_data active local
leaf feature@bookmarks enabled local
leaf feature@filesystem_limits enabled local
leaf feature@large_blocks enabled local
leaf feature@large_dnode enabled local
leaf feature@sha512 enabled local
leaf feature@skein enabled local
leaf feature@edonr disabled local
leaf feature@userobj_accounting enabled local
leaf feature@encryption enabled local
leaf feature@project_quota enabled local
leaf feature@device_removal enabled local
leaf feature@obsolete_counts enabled local
leaf feature@zpool_checkpoint enabled local
leaf feature@spacemap_v2 active local
leaf feature@allocation_classes enabled local
leaf feature@resilver_defer enabled local
leaf feature@bookmark_v2 enabled local
leaf feature@redaction_bookmarks enabled local
leaf feature@redacted_datasets enabled local
leaf feature@bookmark_written enabled local
leaf feature@log_spacemap active local
leaf feature@livelist enabled local
leaf feature@device_rebuild enabled local
leaf feature@zstd_compress active local
leaf feature@draid disabled local
leaf feature@zilsaxattr disabled local
leaf feature@head_errlog disabled local
leaf feature@blake3 disabled local
leaf feature@block_cloning disabled local
leaf feature@vdev_zaps_v2 disabled local
leaf feature@redaction_list_spill disabled local
leaf feature@raidz_expansion disabled local
leaf feature@fast_dedup disabled local
leaf feature@longname disabled local
leaf feature@large_microzap disabled local
titan#
Nothing of interest there other than the blank cachefile, which I cannot
set to anything. At least, it seems to reject my attempts to set it.
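For the record, what I have been trying is roughly this (a sketch using the standard FreeBSD cache file path, nothing exotic):

```shell
# Point the pool at the usual cache file location.
zpool set cachefile=/etc/zfs/zpool.cache leaf

# Check whether the property actually took.
zpool get cachefile leaf
```
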
titan#
titan# zpool status proteus
  pool: proteus
 state: ONLINE
  scan: scrub repaired 0B in 00:53:43 with 0 errors on Mon Jul  1 18:56:34 2024
config:

	NAME        STATE     READ WRITE CKSUM
	proteus     ONLINE       0     0     0
	  da0p1     ONLINE       0     0     0

errors: No known data errors
titan#
titan# camcontrol devlist | grep 'FREEBSD'
<FREEBSD CTLDISK 0001> at scbus8 target 0 lun 0 (da0,pass5)
titan#
titan# zpool get all proteus
NAME PROPERTY VALUE SOURCE
proteus size 1.98T -
proteus capacity 17% -
proteus altroot - default
proteus health ONLINE -
proteus guid 4488185358894371950 -
proteus version - default
proteus bootfs - default
proteus delegation on default
proteus autoreplace on local
proteus cachefile - default
proteus failmode continue local
proteus listsnapshots off default
proteus autoexpand off default
proteus dedupratio 1.00x -
proteus free 1.63T -
proteus allocated 361G -
proteus readonly off -
proteus ashift 0 default
proteus comment - default
proteus expandsize - -
proteus freeing 0 -
proteus fragmentation 1% -
proteus leaked 0 -
proteus multihost off default
proteus checkpoint - -
proteus load_guid 3646341449300914421 -
proteus autotrim off default
proteus compatibility openzfs-2.0-freebsd local
proteus bcloneused 0 -
proteus bclonesaved 0 -
proteus bcloneratio 1.00x -
proteus dedup_table_size 0 -
proteus dedup_table_quota auto default
proteus feature@async_destroy enabled local
proteus feature@empty_bpobj active local
proteus feature@lz4_compress active local
proteus feature@multi_vdev_crash_dump enabled local
proteus feature@spacemap_histogram active local
proteus feature@enabled_txg active local
proteus feature@hole_birth active local
proteus feature@extensible_dataset active local
proteus feature@embedded_data active local
proteus feature@bookmarks enabled local
proteus feature@filesystem_limits enabled local
proteus feature@large_blocks enabled local
proteus feature@large_dnode enabled local
proteus feature@sha512 active local
proteus feature@skein enabled local
proteus feature@edonr disabled local
proteus feature@userobj_accounting active local
proteus feature@encryption enabled local
proteus feature@project_quota active local
proteus feature@device_removal enabled local
proteus feature@obsolete_counts enabled local
proteus feature@zpool_checkpoint enabled local
proteus feature@spacemap_v2 active local
proteus feature@allocation_classes enabled local
proteus feature@resilver_defer enabled local
proteus feature@bookmark_v2 enabled local
proteus feature@redaction_bookmarks enabled local
proteus feature@redacted_datasets enabled local
proteus feature@bookmark_written enabled local
proteus feature@log_spacemap active local
proteus feature@livelist enabled local
proteus feature@device_rebuild enabled local
proteus feature@zstd_compress active local
proteus feature@draid disabled local
proteus feature@zilsaxattr disabled local
proteus feature@head_errlog disabled local
proteus feature@blake3 disabled local
proteus feature@block_cloning disabled local
proteus feature@vdev_zaps_v2 disabled local
proteus feature@redaction_list_spill disabled local
proteus feature@raidz_expansion disabled local
proteus feature@fast_dedup disabled local
proteus feature@longname disabled local
proteus feature@large_microzap disabled local
titan#
Again, here we see that cachefile is blank.
Lastly, there is the little Samsung NVMe bootable device:
titan#
titan# zpool list t0
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
t0 444G 91.2G 353G - - 27% 20% 1.00x ONLINE -
titan#
titan#
titan# zpool status t0
  pool: t0
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
	The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
	the pool may no longer be accessible by software that does not support
	the features. See zpool-features(7) for details.
  scan: scrub repaired 0B in 00:00:44 with 0 errors on Wed Feb  7 09:56:40 2024
config:

	NAME        STATE     READ WRITE CKSUM
	t0          ONLINE       0     0     0
	  nda0p3    ONLINE       0     0     0

errors: No known data errors
titan#
titan#
titan# zpool get all t0
NAME PROPERTY VALUE SOURCE
t0 size 444G -
t0 capacity 20% -
t0 altroot - default
t0 health ONLINE -
t0 guid 2604455524152494878 -
t0 version - default
t0 bootfs t0/ROOT/default local
t0 delegation on default
t0 autoreplace off default
t0 cachefile - default
t0 failmode wait default
t0 listsnapshots off default
t0 autoexpand off default
t0 dedupratio 1.00x -
t0 free 353G -
t0 allocated 91.2G -
t0 readonly off -
t0 ashift 0 default
t0 comment - default
t0 expandsize - -
t0 freeing 0 -
t0 fragmentation 27% -
t0 leaked 0 -
t0 multihost off default
t0 checkpoint - -
t0 load_guid 5797689675549497497 -
t0 autotrim off default
t0 compatibility off default
t0 bcloneused 12K -
t0 bclonesaved 12K -
t0 bcloneratio 2.00x -
t0 dedup_table_size 0 -
t0 dedup_table_quota auto default
t0 feature@async_destroy enabled local
t0 feature@empty_bpobj active local
t0 feature@lz4_compress active local
t0 feature@multi_vdev_crash_dump enabled local
t0 feature@spacemap_histogram active local
t0 feature@enabled_txg active local
t0 feature@hole_birth active local
t0 feature@extensible_dataset active local
t0 feature@embedded_data active local
t0 feature@bookmarks enabled local
t0 feature@filesystem_limits enabled local
t0 feature@large_blocks enabled local
t0 feature@large_dnode enabled local
t0 feature@sha512 active local
t0 feature@skein enabled local
t0 feature@edonr enabled local
t0 feature@userobj_accounting active local
t0 feature@encryption enabled local
t0 feature@project_quota active local
t0 feature@device_removal enabled local
t0 feature@obsolete_counts enabled local
t0 feature@zpool_checkpoint enabled local
t0 feature@spacemap_v2 active local
t0 feature@allocation_classes enabled local
t0 feature@resilver_defer enabled local
t0 feature@bookmark_v2 enabled local
t0 feature@redaction_bookmarks enabled local
t0 feature@redacted_datasets enabled local
t0 feature@bookmark_written enabled local
t0 feature@log_spacemap active local
t0 feature@livelist enabled local
t0 feature@device_rebuild enabled local
t0 feature@zstd_compress active local
t0 feature@draid enabled local
t0 feature@zilsaxattr active local
t0 feature@head_errlog active local
t0 feature@blake3 enabled local
t0 feature@block_cloning active local
t0 feature@vdev_zaps_v2 active local
t0 feature@redaction_list_spill enabled local
t0 feature@raidz_expansion enabled local
t0 feature@fast_dedup disabled local
t0 feature@longname disabled local
t0 feature@large_microzap disabled local
titan#
There is nothing of interest in the properties other than the absent
cachefile setting. However, I guess I could try deleting the previous
cache files referenced in /etc/rc.d/zpool:
titan# cat /etc/rc.d/zpool
#!/bin/sh
#
#

# PROVIDE: zpool
# REQUIRE: hostid disks
# BEFORE: mountcritlocal
# KEYWORD: nojail

. /etc/rc.subr

name="zpool"
desc="Import ZPOOLs"
rcvar="zfs_enable"
start_cmd="zpool_start"
required_modules="zfs"

zpool_start()
{
	local cachefile

	for cachefile in /etc/zfs/zpool.cache /boot/zfs/zpool.cache; do
		if [ -r $cachefile ]; then
			zpool import -c $cachefile -a -N
			if [ $? -ne 0 ]; then
				echo "Import of zpool cache ${cachefile} failed," \
				    "will retry after root mount hold release"
				root_hold_wait
				zpool import -c $cachefile -a -N
			fi
			break
		fi
	done
}

load_rc_config $name
run_rc_command "$1"
titan#
titan#
titan#
titan# ls -l /etc/zfs/zpool.cache /boot/zfs/zpool.cache
-rw-r--r-- 1 root wheel 1424 Jan 16 2024 /boot/zfs/zpool.cache
-rw-r--r-- 1 root wheel 4960 Nov 28 14:15 /etc/zfs/zpool.cache
titan#
May as well delete them. I have nothing to lose at this point.
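For anyone following along, the clean-slate procedure I have in mind is roughly this (my own guess at the steps, not a tested recipe):

```shell
# Remove the stale cache files so nothing old gets imported at boot.
rm /etc/zfs/zpool.cache /boot/zfs/zpool.cache

# Re-set cachefile on each pool so ZFS writes fresh entries
# into a new cache file.
zpool set cachefile=/etc/zfs/zpool.cache t0
zpool set cachefile=/etc/zfs/zpool.cache leaf
zpool set cachefile=/etc/zfs/zpool.cache proteus
```
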
--
Dennis Clarke
RISC-V/SPARC/PPC/ARM/CISC
UNIX and Linux spoken