* Wietse Venema <postfix-users@postfix.org>:
> Patrick Ben Koetter:
> > * Viktor Dukhovni <postfix-users@postfix.org>:
> > > On Mon, Jun 12, 2017 at 10:32:18AM +0200, Patrick Ben Koetter wrote:
> > >
> > > > * Julian Kippels <kipp...@hhu.de>:
> > > > > would it be faster to have several smaller files for alias_maps and
> > > > > transport_maps for each virtual domain, or have one giant file each
> > > > > with all user domains from all virtual domains in one file? Around
> > > > > 90% of traffic is for one domain and the rest is split among 32
> > > > > other domains.
> > > >
> > > > Hard to tell. If they are static, binary maps Postfix will read them
> > > > all into memory and work with the in-memory copies. So you don't gain
> > > > any speed improvement from a giant file.
> > >
> > > A single CDB, LMDB or Berkeley DB file is much more efficient than
> > > multiple smaller files.
> >
> > At which message throughput rate will this make a difference?
>
> Always. Because you're replacing hashing with linear search.
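
For reference, in main.cf terms the two layouts under discussion look
roughly like this (a sketch with hypothetical file names; hash: stands in
for any indexed table type such as cdb: or lmdb:):

  # Alternative A: one small indexed map per virtual domain.
  # Postfix consults the tables in the order they are listed, so a single
  # lookup may have to ask every map in the list before it finds a match
  # or gives up.
  transport_maps =
      hash:/etc/postfix/transport-example.com
      hash:/etc/postfix/transport-example.net
      hash:/etc/postfix/transport-example.org

  # Alternative B: one combined map; every lookup is answered from a
  # single indexed file.
  transport_maps = hash:/etc/postfix/transport

  # In both cases each file is built offline with postmap(1), e.g.
  #   postmap hash:/etc/postfix/transport

Within each individual file the lookup is indexed either way; the
difference is only how many tables Postfix has to try per query.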
If you compare hashing to linear search, yes. But I am not sure that is what
the OP's question was about. He wrote "would it be faster to have several
smaller files (...) or have one giant file". The way I understood it, he was
not comparing hashing vs. linear search, but many small(er) hashed maps vs.
one large hashed map. That is the comparison I had in mind, and it is why I
asked about message throughput rate.

The goal I am heading for is: if someone runs a platform at x msg/sec, and x
is below the threshold where throughput drops because of "too many small
maps", why bother? Below that threshold, stick with many small maps if they
give you some other advantage.

p@rick

--
[*] sys4 AG

https://sys4.de, +49 (89) 30 90 46 64
Schleißheimer Straße 26/MG, 80333 München

Sitz der Gesellschaft: München, Amtsgericht München: HRB 199263
Vorstand: Patrick Ben Koetter, Marc Schiffbauer, Wolfgang Stief
Aufsichtsratsvorsitzender: Florian Kirstein