I have no problem dealing with large amounts of files; it's just that the underlying filesystem is not built for storing a large number of files in one directory. I'm talking about ext2/ext3, which is what I use...
Some threads about it:

http://serverfault.com/questions/129953/maximum-number-of-files-in-one-ext3-directory-while-still-getting-acceptable-perf
"My personal rule of thumb is to aim for a directory size of <= 20k files, although I've seen relatively decent performance with up to 100k files/directory."

http://roopindersingh.com/2008/05/10/ext3-handling-large-number-of-files-in-a-directory/

On Jun. 9, 12:40, Jason Brower <encomp...@gmail.com> wrote:
> I wonder if it would be better to sort by type:
> /uploads/table_name/field_name/
> Working with those images/files should be done from a database, don't you
> think? When I deal with large amounts of files I use the console.
> BR,
> Jason Brower
>
> On Tue, 2010-06-08 at 23:45 -0700, szimszon wrote:
> > I wasn't able to continue the thread in
> > http://groups.google.com/group/web2py/browse_frm/thread/a81248fec1dce...
> >
> > So...
> >
> > I imagine that I will have lots of files, say 10,000 or more. :)
> > I think that with ext2/ext3 filesystems, that many files in one
> > directory is a mess.
> >
> > Is it completely out of the question to have upload/download handle
> > this issue in trunk?
> >
> > I'm thinking of some kind of directory structure where one directory
> > (say upload/0) holds X files, then a new one (upload/1) is created
> > and the new files are stored in it...
> > ...and download could handle it out of the box.
> >
> > Or the first one or two characters of the generated filename become
> > the directory name under upload/, and the file is stored in that
> > directory... it could be a Field switch which defaults to the old
> > behavior...
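To make the "first one or two characters of the filename" variant from my original post a bit more concrete, here is a rough sketch in plain Python. This is not web2py's actual upload/download code; names like UPLOAD_ROOT, shard_dir and store are just made up for illustration:

import os
import shutil

UPLOAD_ROOT = "uploads"  # assumed upload folder; in web2py it would be app-specific

def shard_dir(filename):
    # Use the first two characters of the stored filename as the
    # subdirectory name, e.g. "ab12cd...png" -> "uploads/ab/".
    # With web2py's generated names the prefix would differ, of course.
    prefix = os.path.basename(filename)[:2]
    return os.path.join(UPLOAD_ROOT, prefix)

def store(src_path, filename):
    # Create the shard directory on demand and move the file into it,
    # so no single directory accumulates tens of thousands of entries.
    target = shard_dir(filename)
    if not os.path.isdir(target):
        os.makedirs(target)
    dest = os.path.join(target, filename)
    shutil.move(src_path, dest)
    return dest

def retrieve(filename):
    # Download side: recompute the same prefix to locate the file.
    return os.path.join(shard_dir(filename), filename)

Upload and download would only have to agree on the same prefix rule, and a Field switch could keep the current flat layout as the default for existing apps.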