On Fri, Oct 24, 2025, at 6:32 PM, home user via users wrote:
> (addressing compression)
>
> Turning off compression is what I'm mainly wanting to do.
> I frequently make large copies from or to removable media.  That takes a 
> lot of time.  Compression and decompression will make that even longer.

Hypothetically it's faster, because compression means fewer reads at the cost of 
more CPU. It's a tradeoff: if the source device is slower than CPU 
decompression, it's possible to improve performance, or at least reduce write 
wear. We'd have to benchmark it per system and per workload to really know for 
sure.
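
If you want a rough comparison, something like this works as a sketch (the 
device, mount point, and test data paths are placeholders, and compress=no vs 
compress=zstd:1 are just two points to compare):

# time the same copy with compression disabled, then enabled
mount -o compress=no /dev/sdXY /mnt/test
sync && echo 3 > /proc/sys/vm/drop_caches   # drop caches so reads hit the device
time cp -a /path/to/testdata /mnt/test/
umount /mnt/test

mount -o compress=zstd:1 /dev/sdXY /mnt/test
sync && echo 3 > /proc/sys/vm/drop_caches
time cp -a /path/to/testdata /mnt/test/
umount /mnt/test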


> I've occasionally had to look into files with a hex editor.  Might 
> compression interfere with that?

It depends.  If you do:

dd if=/path/to/file count=2 2>/dev/null | hexdump -C

That will not show evidence of compression, because the compression is 
transparent: as far as user space is concerned, all files appear uncompressed.
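
Though if you want to know whether, and how well, a file is compressed, there's 
the compsize tool (packaged separately in Fedora) that reports it from user 
space:

compsize /path/to/file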

Whereas if you do:

dd if=/dev/sdXY skip=$tosomeblock count=2 2>/dev/null | hexdump -C

Now you're reading blocks off the drive, outside the file system, and yes, this 
will show the compression. But it's pretty tedious: you need a specific 
physical address for the file, which can only come from finding its logical 
address in the Btrfs address space and then passing that through a chunk tree 
lookup to find out what device and physical sector it's on. (That sector 
address is confusingly referred to as an LBA, because the drive also has 
logical addressing, which we call physical sectors; internally the drive really 
does have physical sectors, but in an address space we have no access to.)
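
If you really want to chase an extent down, there are helpers that do the 
lookups (a sketch; btrfs-map-logical ships with btrfs-progs, and the device and 
address here are placeholders):

# extent offsets; on Btrfs the "physical" column is actually the
# logical address in the Btrfs address space
filefrag -v /path/to/file

# map a Btrfs logical address to a device and physical offset
btrfs-map-logical -l $logical_addr /dev/sdXY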

It might be true that data recovery is harder if the data is compressed, in 
that a single bit flip can corrupt more data in a compressed extent than in an 
uncompressed one. It's surely better to mitigate this with backups than by not 
compressing, though.

There are also now fast compression options for zstd in the kernel, and btrfs 
supports them. Fedora hasn't changed the compression level it uses. 
Benchmarking might prove there's a better option than what we're using. The 
gotcha is that it may not even be a one-size-fits-all choice, but workload and 
hardware specific.
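
If you want to experiment, the level can be set per mount, and the algorithm 
per file or directory (a sketch; /mnt and the level are placeholders, and 
whether the fast negative levels are accepted depends on the kernel version):

# remount with an explicit zstd level
mount -o remount,compress=zstd:1 /mnt

# or mark a directory so new writes under it get zstd
btrfs property set /mnt/somedir compression zstd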

I help triage most of the btrfs bug reports that get filed, and I can't 
remember even one bug related to compression. But there have been a number of 
memory bit flip and silent data corruption issues, so data checksumming is 
useful.
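
The checksums are what a scrub verifies, so that's the easy way to exercise 
them (the mount point is a placeholder):

# read everything and verify data and metadata checksums, in the foreground
btrfs scrub start -B /mnt

# or check on a background scrub
btrfs scrub status /mnt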



-- 
Chris Murphy