On Tue, Sep 10, 2013 at 09:54:38AM +0300, Adrian Hunter wrote:
> On 09/09/13 14:17, Peter Zijlstra wrote:
> > The only reason I wanted this is so that each thread can write its own
> > data. The current one file thing is an immense bottle-neck for big
> > machines.
>
> Do you need multiple files for that? Why not just feed writes to a pool of
> threads?

Not sure I understand what you mean there. Since we have one (or multiple)
events per cpu, it doesn't make sense to have another cpu read that data
and write it to disk; that completely destroys the locality.

Instead, the suggestion was to have a thread per cpu (bound to it) empty
out the per-cpu mmap buffer(s) and write them to disk.

Now the only way to coherently do that into a single file is some kind of
streams implementation, which I suppose is possible: pre-allocate
large-ish sections per thread and mark them as such, let each thread spool
data into its section, and allocate a new section when one fills up.

However, this results in a minimal file size of
allocation-block-size * nr_cpus, which isn't a problem per se since the
sections can be sparse. It does, however, pretty much mess up Jolsa's
max-MBs-per-file thing, and it is of course slightly more complex than
having a file per cpu/thread.

It is also more likely to create kernel lock contention; the pagecache
locks are typically per inode, and bouncing those around the machine
isn't nice either.
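For illustration only (this is not perf code; drain_buffer(), writer_fn()
and the perf.data.cpuN naming are all made up), a minimal sketch of the
per-cpu writer idea above: one thread per cpu, bound to that cpu,
draining its own mmap buffer into its own file:

/* Sketch, not perf code: one writer thread per cpu, bound to that cpu,
 * spooling its own buffer into its own file (perf.data.cpuN).
 * drain_buffer() is a stand-in for the real mmap ring-buffer copy.
 */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <fcntl.h>

static int drain_buffer(void *buf, int fd)
{
	(void)buf; (void)fd;
	return -1;		/* stub: real code copies ready data to fd */
}

struct cpu_writer {
	int		cpu;
	int		fd;
	void		*buf;	/* per-cpu mmap ring buffer */
	pthread_t	thread;
};

static void *writer_fn(void *arg)
{
	struct cpu_writer *w = arg;
	cpu_set_t set;

	/* Bind to the cpu whose buffer we drain, so reads stay local. */
	CPU_ZERO(&set);
	CPU_SET(w->cpu, &set);
	pthread_setaffinity_np(pthread_self(), sizeof(set), &set);

	while (drain_buffer(w->buf, w->fd) >= 0)
		;
	return NULL;
}

static int start_writer(struct cpu_writer *w, int cpu)
{
	char name[64];

	snprintf(name, sizeof(name), "perf.data.cpu%d", cpu);
	w->cpu = cpu;
	w->fd  = open(name, O_CREAT | O_WRONLY | O_TRUNC, 0600);
	if (w->fd < 0)
		return -1;
	return pthread_create(&w->thread, NULL, writer_fn, w);
}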
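And a sketch of the single-file "sections" variant; the 16MB section size
and the atomic hand-out counter are assumptions, not anything proposed
here. Each thread grabs a fixed-size region of the shared (sparse) file,
spools into it, and takes a new region when it fills up:

/* Sketch of the "sections" idea: carve the single output file into
 * fixed-size sections handed out to writer threads. The file stays
 * sparse, so nr_cpus * SECTION_SIZE is only the apparent minimum size.
 */
#include <stdatomic.h>
#include <stdint.h>
#include <unistd.h>

#define SECTION_SIZE	(16 * 1024 * 1024)	/* "large-ish", e.g. 16MB */

static atomic_uint_fast64_t next_section;	/* shared hand-out counter */

struct section {
	uint64_t base;		/* file offset of this section */
	uint64_t used;		/* bytes already spooled into it */
};

/* Grab the next free section of the shared file. */
static void section_alloc(struct section *s)
{
	s->base = atomic_fetch_add(&next_section, 1) * SECTION_SIZE;
	s->used = 0;
	/* a real implementation would also record which thread owns it */
}

/* Spool data into the current section; take a new one when it fills up. */
static ssize_t section_write(int fd, struct section *s,
			     const void *data, size_t len)
{
	if (s->used + len > SECTION_SIZE)
		section_alloc(s);
	ssize_t ret = pwrite(fd, data, len, s->base + s->used);
	if (ret > 0)
		s->used += ret;
	return ret;
}

Note that the per-cpu-file scheme avoids both the shared counter and the
shared inode entirely, which is exactly the lock-contention point made
above.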