On 5/5/25 12:58, Greg Hellings wrote:
> zVerse is limited to a 65,535 byte cap per block... zText4 uses a 32-bit value

Kinda serious question:

In a world of Gbit net.links, memory size typically in tens of Gbytes, and multi-Tbyte storage, does module compression make any sense today?

- Nobody is using PDP-11s any more, and we're not struggling to transfer data over sloppy, error-prone 56kbps links.
- Tiny handheld devices (e.g. smartphones) have 5G network access, 64-bit addressing, and -- minimally -- dozens of Gbytes of storage.

My main environment has nearly 900 modules installed (because sooner or later I have to experiment with darn near everything, and it accretes), whose total footprint is just 6.6Gbytes. This is not consequential storage today.

$ du -sb .sword
6659535239    /home/karl/.sword
$ ls .sword/mods.d/*.conf | wc -l
898
$ df -Th .
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/nvme1n1p7 ext4  702G  521G  181G  75% /home

Is compression's space savings actually worth it? Is the reduced I/O of reading a compressed module offset by the increased complexity of handling it?

The biggest text module currently being distributed (BSB, compressed) is <27Mbytes. Everything else is smaller than that.

Sword could be reimplemented to use mmap() to inhale entire uncompressed bibles into virtual memory in half a microsecond without causing the slightest grief, rather than spending time managing decompression.

Maybe let the VM system do the work instead. I sense that compression handling has become an anachronism.
_______________________________________________
sword-devel mailing list: sword-devel@crosswire.org
http://crosswire.org/mailman/listinfo/sword-devel
Instructions to unsubscribe/change your settings at above page