Regarding #2 in comment #8 - I found that we can more or less do this
with a few simple modifications to SQUASHFS_DECOMP_MULTI. The config
options act as upper bounds on the number of decompressors and data
cache blocks (a rough sketch of the kind of change follows the numbers
below). I tested this with the mounted-fs-memory-checker for
comparison, limiting squashfs to 1 data cache block and 4 decompressors
per superblock (and with CONFIG_SQUASHFS_FRAGMENT_CACHE_SIZE=1). Here's
what I got for the "heavy" filesystems on a 2-core VM:

size-0m.squashfs.xz.heavy
# num-mounted extra-memory delta
0: 39.45MB
1: 39.85MB (delta: 0.40MB)
2: 41.91MB (delta: 2.06MB)
3: 43.99MB (delta: 2.07MB)
4: 46.06MB (delta: 2.08MB)
size-1m.squashfs.xz.heavy
# num-mounted extra-memory delta
0: 39.45MB
1: 39.85MB (delta: 0.40MB)
2: 41.91MB (delta: 2.06MB)
3: 43.97MB (delta: 2.06MB)
4: 46.04MB (delta: 2.06MB)
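
To give a sense of the shape of the change, here is an illustrative
sketch only - not the actual patch, and CONFIG_SQUASHFS_DECOMP_MULTI_MAX
is a hypothetical option name - of how the hard-coded limit in
fs/squashfs/decompressor_multi.c can be turned into a Kconfig-controlled
upper bound:

/*
 * Sketch only (fs/squashfs/decompressor_multi.c): cap the number of
 * parallel decompressors per superblock at a Kconfig-supplied value
 * instead of always allowing num_online_cpus() * 2.  The option name
 * CONFIG_SQUASHFS_DECOMP_MULTI_MAX is hypothetical.
 */
#define MAX_DECOMPRESSOR \
        min_t(int, num_online_cpus() * 2, CONFIG_SQUASHFS_DECOMP_MULTI_MAX)

int squashfs_max_decompressors(void)
{
        return MAX_DECOMPRESSOR;
}

Since the "data" cache is sized from squashfs_max_decompressors() when
the superblock is set up, a second option can bound the number of data
cache blocks in the same way.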

I expect this is identical to what we'd get with the kernel from comment
#7, and it's probably the minimum we can achieve (2 * fs_block_size).
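
(The floor of 2 * fs_block_size is just the two caches that remain
after limiting them, one block each:

    1 data cache block + 1 fragment cache block = 2 * fs_block_size

which, assuming these test images use squashfs's maximum 1M block size,
is consistent with the ~2MB per-mount deltas above.)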

I want to do some performance comparisons between these kernels and
4.4.0-47.68, and to get some idea of how often squashfs has to fall
back to using the data cache rather than decompressing directly into
the page cache.
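
For the last part, a cheap option is a debug counter on the fallback
path in fs/squashfs/file_direct.c, where the direct path gives up and
reads through squashfs_read_cache() instead. A sketch only - this is
not in the build linked below, and the helper name is made up:

#include <linux/atomic.h>
#include <linux/printk.h>

/*
 * Sketch only: count fallbacks from the direct (page cache) read path
 * to the data cache, and report the running total now and then.
 */
static atomic_t squashfs_cache_fallbacks = ATOMIC_INIT(0);

static inline void squashfs_count_cache_fallback(void)
{
        int n = atomic_inc_return(&squashfs_cache_fallbacks);

        if (n % 1024 == 0)
                pr_info("squashfs: %d fallbacks to the data cache so far\n",
                        n);
}

Calling that just before the fallback to squashfs_read_cache() should
give a rough feel for how often the direct path fails.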

My most recent build (with one block in the data cache, one block in the
fragment cache, and a maximum of 4 parallel decompressors) can be found
at

http://people.canonical.com/~sforshee/lp1636847/linux-4.4.0-47.68+lp1636847v201611101005/
