https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=208130
Bug ID: 208130
Summary: smbfs is slow because it (apparently) doesn't do any caching/buffering
Product: Base System
Version: 10.2-RELEASE
Hardware: amd64
OS: Any
Status: New
Severity: Affects Only Me
Priority: ---
Component: kern
Assignee: freebsd-bugs@FreeBSD.org
Reporter: noah.bergba...@tum.de
CC: freebsd-am...@freebsd.org

I set up an smbfs mount on FreeBSD 10.2-RELEASE today and noticed that it is very slow. How slow? Some numbers: reading a 600 MB file from the share with dd reports around 1 MB/s, while doing the same from a Linux VM running inside bhyve on the very same machine yields a whopping 100 MB/s. I conclude that the SMB server is irrelevant in this case.

There is a recent [discussion](https://lists.freebsd.org/pipermail/freebsd-hackers/2015-November/048597.html) about this on freebsd-hackers which reveals an interesting detail: the situation can be improved massively, up to around 60 MB/s, on the FreeBSD side just by using a larger dd buffer size (e.g. 1 MB). Interestingly, using very small buffers has only a negligible impact on Linux (until the whole affair gets CPU-bottlenecked, of course).

I know little about SMB, but a quick network traffic analysis gives some insight: FreeBSD's smbfs seems to translate every read() call from dd directly into an SMB request. So with a small buffer size of e.g. 1k, something like this seems to happen:

* client requests 1k of data
* client waits for a response (network round-trip)
* client receives response
* client hands data to dd, which then issues another read()
* client requests 1k of data
* ...

Note how most of the time is spent waiting for network round-trips. A bigger buffer means larger SMB requests, which obviously leads to higher network saturation and less wasted time. I am unable to spot a similar pattern on Linux: there, a steady flow of data is maintained even with small buffer sizes, so apparently some caching/buffering must be happening.
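The round-trip arithmetic above can be sketched numerically. With an assumed ~1 ms round trip and a ~100 MB/s link (both figures are assumptions for illustration, not measurements from this report), a model where every read() costs one full round trip reproduces the rough shape of the observations: about 1 MB/s at a 1k buffer, climbing steeply as the buffer grows.

```python
# Back-of-the-envelope model: one synchronous SMB request per read().
# Effective throughput = buffer size / (round-trip time + transfer time).
# RTT and LINK are assumed values, not taken from the bug report.
RTT = 0.001   # assumed network round-trip time: 1 ms
LINK = 100e6  # assumed link bandwidth: ~100 MB/s

def throughput(bs):
    """Bytes per second when each bs-byte read costs one round trip."""
    return bs / (RTT + bs / LINK)

for bs in (1024, 64 * 1024, 1024 * 1024):
    print(f"{bs:>8} B buffer -> {throughput(bs) / 1e6:5.1f} MB/s")
```

With these assumed numbers, a 1k buffer comes out at roughly 1 MB/s, matching the observation; large buffers approach link speed (the model overshoots the reported ~60 MB/s, plausibly because real SMB requests are capped at a maximum size).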
Linux's cifs has a "cache" option and indeed, disabling it produces exactly the same performance (and network) behavior I am seeing on FreeBSD. So to sum things up: the fact that smbfs doesn't have anything like Linux's cache causes a roughly 100-fold performance hit for small-buffer reads. Obviously, that's a problem.
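Conceptually, what the Linux-style cache buys can be sketched as client-side read-ahead: the client issues fixed-size wire requests independent of the application's read() size and serves small reads from the buffered remainder. The toy model below (the Server class, the 64 KiB read-ahead size, and all names are hypothetical, not the real cifs or smbfs implementation) counts round trips with and without such buffering.

```python
class Server:
    """Stand-in for the SMB server; counts wire round trips."""
    def __init__(self, data):
        self.data = data
        self.round_trips = 0

    def read(self, offset, length):
        self.round_trips += 1
        return self.data[offset:offset + length]

def read_all(server, total, bs, readahead):
    """Sequentially read `total` bytes in application chunks of `bs`,
    issuing wire requests of `readahead` bytes and serving smaller
    read() calls from the buffered remainder."""
    buf = b""
    offset = 0
    done = 0
    while done < total:
        if len(buf) < bs:
            buf += server.read(offset, readahead)
            offset += readahead
        chunk, buf = buf[:bs], buf[bs:]
        done += len(chunk)
    return done
```

With readahead equal to the 1 KiB application buffer (the no-cache behavior described above), reading 1 MiB costs 1024 round trips; with a 64 KiB read-ahead it costs only 16, which is the qualitative difference visible in the network traces.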