Linux 32-bit 4.1.8 kernel, PowerPC, embedded, real-time patch, SysV init. I had an ext4 file system mounted at /run/media/mmcblk0p1. I unmounted it, erased it and then recreated it:
1) `fdisk /dev/mmcblk0` -> select: `n` -> Enter (default value for start) -> Enter (default value for end) -> Enter (default) -> Enter (default) -> select: `w` (write changes to the MBR). In short, I pressed `n`, then Enter four times, then `w`.
2) `mkfs.ext4 -E nodiscard -F /dev/mmcblk0p1`

After a restart I got the following error:

```
EXT4-fs (mmcblk0p1): Filesystem with huge files cannot be mounted RDWR without CONFIG_LBDAF
```

Because I don't need huge_file support (files bigger than 2 TB), I used:

```
mkfs.ext4 -O ^huge_file -E nodiscard -F /dev/mmcblk0p1
```

and now the system works with /dev/mmcblk0p1 mounted at /run/media/mmcblk0p1.
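For reference, the resulting feature set can be checked before mounting (standard e2fsprogs tools, device name as above); huge_file should no longer appear in the list:

```
# List the ext4 feature flags of the freshly created filesystem
dumpe2fs -h /dev/mmcblk0p1 2>/dev/null | grep -i 'features'
# or, equivalently:
tune2fs -l /dev/mmcblk0p1 | grep -i 'features'
```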
The problem is that since these actions, the boot sometimes takes a long time and sometimes does not. I have no idea why.

**bad dmesg:**

```
[    5.775060] udevd[872]: renamed network interface eth0 to fm2-gb0
[    5.793777] udevd[873]: renamed network interface eth1 to fm2-gb1
[    7.482486] EXT4-fs (mmcblk0p1): recovery complete
[    7.487834] EXT4-fs (mmcblk0p1): mounted filesystem with ordered data mode. Opts: (null)
[    7.728026] random: dd urandom read with 29 bits of entropy available
[    9.113007] random: nonblocking pool is initialized
[   43.058167] [rmStart]:The OSA version is: LINUXA4.1.0 from 16-04-2019
[   43.094075] [rmStart] gpr: id: 0 - physAddr: 0x1a0000000 - len: 0x1000 - memAddr: 0xf1afe000
[   43.104048] [rmStart] dcfg: id: 7 - physAddr: 0x1900e0000 - len: 0x1000 - memAddr: 0xf1c7a000
```

**good dmesg:**

```
[    5.773918] udevd[872]: renamed network interface eth0 to fm2-gb0
[    5.792785] udevd[873]: renamed network interface eth1 to fm2-gb1
[    7.508329] EXT4-fs (mmcblk0p1): recovery complete
[    7.513673] EXT4-fs (mmcblk0p1): mounted filesystem with ordered data mode. Opts: (null)
[    7.754070] random: dd urandom read with 29 bits of entropy available
[    9.699353] [rmStart]:The OSA version is: LINUXA4.1.0 from 16-04-2019
[    9.710210] [rmStart] gpr: id: 0 - physAddr: 0x1a0000000 - len: 0x1000 - memAddr: 0xf1afe000
```
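The logs are essentially identical until the random-pool messages; in the bad boot the first [rmStart] line only appears around t ≈ 43 s, while in the good boot it appears around t ≈ 9.7 s. To make such gaps stand out I print the delta between consecutive kernel messages; this is just a quick awk sketch that assumes the usual `[seconds]` timestamp prefix:

```
# Print the time spent between consecutive dmesg lines so the
# long pause before [rmStart] shows up immediately.
dmesg | awk -F'[][]' '{ t = $2 + 0; if (NR > 1) printf "%9.3f  %s\n", t - prev, $0; prev = t }'
```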
**/etc/fstab:**

```
/dev/root            /                     auto    defaults                            1  1
proc                 /proc                 proc    defaults                            0  0
devpts               /dev/pts              devpts  mode=0620,gid=5                     0  0
tmpfs                /run                  tmpfs   mode=0755,nodev,nosuid,strictatime  0  0
tmpfs                /var/volatile         tmpfs   defaults                            0  0
/dev/mmcblk0p1       /run/media/mmcblk0p1  ext4    defaults,async,noauto               0  0
```

**tune2fs -l /dev/mmcblk0p1:**

```
tune2fs 1.42.9 (28-Dec-2013)
Filesystem volume name:   <none>
Last mounted on:          <not available>
Filesystem UUID:          49ef04af-109b-433d-81b8-e5d42c947c8c
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file uninit_bg dir_nlink extra_isize
Filesystem flags:         unsigned_directory_hash
Default mount options:    user_xattr acl
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              956592
Block count:              3825408
Reserved block count:     191270
Free blocks:              3723263
Free inodes:              956581
First block:              0
Block size:               4096
Fragment size:            4096
Reserved GDT blocks:      933
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         8176
Inode blocks per group:   511
Flex block group size:    16
Filesystem created:       Wed Dec 13 15:17:05 2017
Last mount time:          Thu Jan  1 00:00:07 1970
Last write time:          Thu Jan  1 00:00:07 1970
Mount count:              9
Maximum mount count:      -1
Last checked:             Wed Dec 13 15:17:05 2017
Check interval:           0 (<none>)
Lifetime writes:          132 MB
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:               256
Required extra isize:     28
Desired extra isize:      28
Journal inode:            8
Default directory hash:   half_md4
Directory Hash Seed:      d3fe2c1f-7dbb-43d5-b6d6-7b867b71386c
Journal backup:           inode blocks
```

*After a couple of prints I noticed that the delay occurs in populate-volatile.sh, in one specific line. I marked it at the file's end. This is the script:*

```
#!/bin/sh
### BEGIN INIT INFO
# Provides:             volatile
# Required-Start:       $local_fs
# Required-Stop:        $local_fs
# Default-Start:        S
# Default-Stop:
# Short-Description:    Populate the volatile filesystem
### END INIT INFO

echo "******populate-volatile start*******"

# Get ROOT_DIR
DIRNAME=`dirname $0`
ROOT_DIR=`echo $DIRNAME | sed -ne 's:/etc/.*::p'`

[ -e ${ROOT_DIR}/etc/default/rcS ] && . ${ROOT_DIR}/etc/default/rcS
# When running populate-volatile.sh at rootfs time, disable cache.
[ -n "$ROOT_DIR" ] && VOLATILE_ENABLE_CACHE=no
# If rootfs is read-only, disable cache.
[ "$ROOTFS_READ_ONLY" = "yes" ] && VOLATILE_ENABLE_CACHE=no

CFGDIR="${ROOT_DIR}/etc/default/volatiles"
TMPROOT="${ROOT_DIR}/var/volatile/tmp"
COREDEF="00_core"

[ "${VERBOSE}" != "no" ] && echo "Populating volatile Filesystems."

create_file() {
    EXEC="
    touch \"$1\";
    chown ${TUSER}.${TGROUP} $1 || echo \"Failed to set owner -${TUSER}- for -$1-.\" >/dev/tty0 2>&1;
    chmod ${TMODE} $1 || echo \"Failed to set mode -${TMODE}- for -$1-.\" >/dev/tty0 2>&1 "

    test "$VOLATILE_ENABLE_CACHE" = yes && echo "$EXEC" >> /etc/volatile.cache.build

    [ -e "$1" ] && {
        [ "${VERBOSE}" != "no" ] && echo "Target already exists. Skipping."
    } || {
        if [ -z "$ROOT_DIR" ]; then
            eval $EXEC &
        else
            # Creating some files at rootfs time may fail and should fail,
            # but these failures should not be logged to make sure the do_rootfs
            # process doesn't fail. This does no harm, as this script will
            # run on target to set up the correct files and directories.
            eval $EXEC > /dev/null 2>&1
        fi
    }
}

mk_dir() {
    EXEC="
    mkdir -p \"$1\";
    chown ${TUSER}.${TGROUP} $1 || echo \"Failed to set owner -${TUSER}- for -$1-.\" >/dev/tty0 2>&1;
    chmod ${TMODE} $1 || echo \"Failed to set mode -${TMODE}- for -$1-.\" >/dev/tty0 2>&1 "

    test "$VOLATILE_ENABLE_CACHE" = yes && echo "$EXEC" >> /etc/volatile.cache.build

    [ -e "$1" ] && {
        [ "${VERBOSE}" != "no" ] && echo "Target already exists. Skipping."
    } || {
        if [ -z "$ROOT_DIR" ]; then
            eval $EXEC
        else
            # For the same reason with create_file(), failures should
            # not be logged.
            eval $EXEC > /dev/null 2>&1
        fi
    }
}

link_file() {
    EXEC="
    if [ -L \"$2\" ]; then
        [ \"\$(readlink -f \"$2\")\" != \"\$(readlink -f \"$1\")\" ] && { rm -f \"$2\"; ln -sf \"$1\" \"$2\"; };
    elif [ -d \"$2\" ]; then
        if awk '\$2 == \"$2\" {exit 1}' /proc/mounts; then
            cp -a $2/* $1 2>/dev/null;
            cp -a $2/.[!.]* $1 2>/dev/null;
            rm -rf \"$2\";
            ln -sf \"$1\" \"$2\";
        fi
    else
        ln -sf \"$1\" \"$2\";
    fi
    "

    test "$VOLATILE_ENABLE_CACHE" = yes && echo " $EXEC" >> /etc/volatile.cache.build

    if [ -z "$ROOT_DIR" ]; then
        eval $EXEC &
    else
        # For the same reason with create_file(), failures should
        # not be logged.
        eval $EXEC > /dev/null 2>&1
    fi
}

check_requirements() {
    cleanup() {
        rm "${TMP_INTERMED}"
        rm "${TMP_DEFINED}"
        rm "${TMP_COMBINED}"
    }

    CFGFILE="$1"
    [ `basename "${CFGFILE}"` = "${COREDEF}" ] && return 0

    TMP_INTERMED="${TMPROOT}/tmp.$$"
    TMP_DEFINED="${TMPROOT}/tmpdefined.$$"
    TMP_COMBINED="${TMPROOT}/tmpcombined.$$"

    sed 's@\(^:\)*:.*@\1@' ${ROOT_DIR}/etc/passwd | sort | uniq > "${TMP_DEFINED}"
    cat ${CFGFILE} | grep -v "^#" | cut -s -d " " -f 2 > "${TMP_INTERMED}"
    cat "${TMP_DEFINED}" "${TMP_INTERMED}" | sort | uniq > "${TMP_COMBINED}"
    NR_DEFINED_USERS="`cat "${TMP_DEFINED}" | wc -l`"
    NR_COMBINED_USERS="`cat "${TMP_COMBINED}" | wc -l`"

    [ "${NR_DEFINED_USERS}" -ne "${NR_COMBINED_USERS}" ] && {
        echo "Undefined users:"
        diff "${TMP_DEFINED}" "${TMP_COMBINED}" | grep "^>"
        cleanup
        return 1
    }

    sed 's@\(^:\)*:.*@\1@' ${ROOT_DIR}/etc/group | sort | uniq > "${TMP_DEFINED}"
    cat ${CFGFILE} | grep -v "^#" | cut -s -d " " -f 3 > "${TMP_INTERMED}"
    cat "${TMP_DEFINED}" "${TMP_INTERMED}" | sort | uniq > "${TMP_COMBINED}"

    NR_DEFINED_GROUPS="`cat "${TMP_DEFINED}" | wc -l`"
    NR_COMBINED_GROUPS="`cat "${TMP_COMBINED}" | wc -l`"

    [ "${NR_DEFINED_GROUPS}" -ne "${NR_COMBINED_GROUPS}" ] && {
        echo "Undefined groups:"
        diff "${TMP_DEFINED}" "${TMP_COMBINED}" | grep "^>"
        cleanup
        return 1
    }

    # Add checks for required directories here

    cleanup
    return 0
}

apply_cfgfile() {
    CFGFILE="$1"

    check_requirements "${CFGFILE}" || {
        echo "Skipping ${CFGFILE}"
        return 1
    }

    cat ${CFGFILE} | grep -v "^#" | \
    while read LINE; do
        eval `echo "$LINE" | sed -n "s/\(.*\)\ \(.*\) \(.*\)\ \(.*\)\ \(.*\)\ \(.*\)/TTYPE=\1 ; TUSER=\2; TGROUP=\3; TMODE=\4; TNAME=\5 TLTARGET=\6/p"`
        TNAME=${ROOT_DIR}${TNAME}
        [ "${VERBOSE}" != "no" ] && echo "Checking for -${TNAME}-."

        [ "${TTYPE}" = "l" ] && {
            TSOURCE="$TLTARGET"
            [ "${VERBOSE}" != "no" ] && echo "Creating link -${TNAME}- pointing to -${TSOURCE}-."
            link_file "${TSOURCE}" "${TNAME}"
            continue
        }

        [ -L "${TNAME}" ] && {
            [ "${VERBOSE}" != "no" ] && echo "Found link."
            NEWNAME=`ls -l "${TNAME}" | sed -e 's/^.*-> \(.*\)$/\1/'`
            echo ${NEWNAME} | grep -v "^/" >/dev/null && {
                TNAME="`echo ${TNAME} | sed -e 's@\(.*\)/.*@\1@'`/${NEWNAME}"
                [ "${VERBOSE}" != "no" ] && echo "Converted relative linktarget to absolute path -${TNAME}-."
            } || {
                TNAME="${NEWNAME}"
                [ "${VERBOSE}" != "no" ] && echo "Using absolute link target -${TNAME}-."
            }
        }

        case "${TTYPE}" in
            "f")  [ "${VERBOSE}" != "no" ] && echo "Creating file -${TNAME}-."
                  create_file "${TNAME}" &
                  ;;
            "d")  [ "${VERBOSE}" != "no" ] && echo "Creating directory -${TNAME}-."
                  mk_dir "${TNAME}"
                  # Add check to see if there's an entry in fstab to mount.
                  ;;
            *)    [ "${VERBOSE}" != "no" ] && echo "Invalid type -${TTYPE}-."
                  continue
                  ;;
        esac
    done
    return 0
}

clearcache=0
exec 9</proc/cmdline
while read line <&9
do
    case "$line" in
        *clearcache*)  clearcache=1
                       ;;
        *)  continue
            ;;
    esac
done
exec 9>&-

if test -e ${ROOT_DIR}/etc/volatile.cache -a "$VOLATILE_ENABLE_CACHE" = "yes" -a "x$1" != "xupdate" -a "x$clearcache" = "x0"
then
    sh ${ROOT_DIR}/etc/volatile.cache
else
    rm -f ${ROOT_DIR}/etc/volatile.cache ${ROOT_DIR}/etc/volatile.cache.build

    for file in `ls -1 "${CFGDIR}" | sort`; do
        apply_cfgfile "${CFGDIR}/${file}"
    done

    # **** This line sometimes executes for a long time during boot, and sometimes not ****
    [ -e ${ROOT_DIR}/etc/volatile.cache.build ] && sync && mv ${ROOT_DIR}/etc/volatile.cache.build ${ROOT_DIR}/etc/volatile.cache
fi

if [ -z "${ROOT_DIR}" ] && [ -f /etc/ld.so.cache ] && [ ! -f /var/run/ld.so.cache ]
then
    ln -s /etc/ld.so.cache /var/run/ld.so.cache
fi
```
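That marked line is the only place in the script that calls `sync`, which flushes all dirty data for every mounted filesystem, so I suspect its duration depends on how much unwritten data happens to be pending at that moment. The prints I added to narrow it down looked roughly like this (the extra `grep`/`echo`/timing lines are my own debugging additions, not part of the stock script):

```
# Temporary debugging around the marked line (my additions, not part of
# the original populate-volatile.sh):
grep -E 'Dirty|Writeback' /proc/meminfo     # data still waiting to be written out
START=`date +%s`
[ -e ${ROOT_DIR}/etc/volatile.cache.build ] && sync && mv ${ROOT_DIR}/etc/volatile.cache.build ${ROOT_DIR}/etc/volatile.cache
END=`date +%s`
echo "volatile.cache sync+mv took $((END - START)) seconds"
```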
Any idea what could be going wrong, and why booting sometimes takes a long time and sometimes does not?