On Mon, Dec 31, 2012 at 11:13 PM, Maxim Kammerer <m...@dee.su> wrote:
> On Tue, Jan 1, 2013 at 2:10 AM, Alec Warner <anta...@gentoo.org> wrote:
>> flatfile lookups are 2-4ms with hot cache. How much faster is the db
>> option?
>
> I guess it depends on the implementation and how close is the system's
> operational situation to an ideal one (whether swap started thrashing,
> etc.). A DB is the proper solution that can be improved if necessary
> (e.g., keeping often-used parts in RAM). Filesystem where it resides
> can be offered hardware with lower seek time or better cache. But I
> agree that it is easy to rationalize bad solutions. I don't like
> waiting on an "ls -l" in addition to the system not being responsive
> due to some other reason, though. But maybe I am expecting too much,
> with even PolKit delegating each query to a full-blown Javascript
> library nowadays.
>
You realize that files are cached in RAM, right? There's a page cache, and
pages are only evicted, in LRU fashion, when the system needs that RAM for
something else. More than likely those pages are always in cache. I say
"pages" very liberally here because most of the files we're dealing with
are under 4096 bytes (yep, I'm making that assumption), so it's really one
page per file. The result is that a request for the data (assuming mmap
here) is handled by just doing a bounds/range check and translating the
virtual address to the physical address where the data is already
resident.

The time required to parse the passwd and group files on the average GNOME
single-user desktop machine (I've got 44 users and 69 groups on that box)
is likely smaller than the overhead of a DB.
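As a rough illustration of that claim (just my own sketch of the mmap path,
not what glibc's nss_files actually does internally): mmap /etc/passwd and
walk every entry. With a hot page cache the kernel never touches the disk,
and the scan itself is a handful of microseconds for a file that size.

  /* Sketch only: mmap /etc/passwd and scan every entry once.
   * Illustrates the "hot page cache + mmap" argument, not the real
   * glibc nss_files implementation. */
  #include <fcntl.h>
  #include <stdio.h>
  #include <sys/mman.h>
  #include <sys/stat.h>
  #include <unistd.h>

  int main(void)
  {
      int fd = open("/etc/passwd", O_RDONLY);
      if (fd < 0) { perror("open"); return 1; }

      struct stat st;
      if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

      /* On a typical desktop this is a single page (< 4096 bytes), so a
       * hot-cache access is just a page-table lookup, no disk I/O. */
      char *map = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
      if (map == MAP_FAILED) { perror("mmap"); return 1; }

      int entries = 0;
      for (off_t i = 0; i < st.st_size; i++)
          if (map[i] == '\n')
              entries++;

      printf("%d entries in %lld bytes\n", entries, (long long)st.st_size);

      munmap(map, st.st_size);
      close(fd);
      return 0;
  }

Wrap the scan in a loop and time it with clock_gettime() if you want actual
numbers; the point is simply that the whole file fits in one resident page
and parsing it is cheap.

-- 
Doug Goldstein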