David Pratt <[EMAIL PROTECTED]> writes:

> Hi. I have files that I will be importing in at least four different
> plain text formats: one is tab-delimited, a couple are token-based
> using pipes (but not pipe-delimited), and another is XML. There will
> likely be others as well, but the data needs to be extracted and
> rewritten to a single format. The files can be fairly large (several
> MB), so I do not want to read the whole file into memory. What
> approach would be recommended for sniffing the files for the
> different text formats? I realize the csv module has a sniffer, but
> it is more or less limited to delimited files. I have a couple of
> ideas on what I could do, but I am interested in hearing from others
> on how they might handle something like this so I can determine the
> best approach to take. Many thanks.
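For reference, the csv sniffer mentioned above can be pointed at just a
sample of the file rather than the whole thing; the filename and the
candidate delimiters below are made up for illustration. It can guess a
delimiter, but it typically raises csv.Error on text that isn't
delimiter-structured, which is why it falls short for the token-based
and XML files:

    import csv

    with open('incoming.txt', 'r') as f:   # made-up filename
        sample = f.read(8192)              # a sample is enough to sniff

    try:
        dialect = csv.Sniffer().sniff(sample, delimiters='\t,|')
        print('delimiter: %r' % dialect.delimiter)
    except csv.Error:
        # Typically raised for non-delimited text such as XML.
        print('not a delimited file')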
With GB-memory machines being common, I wouldn't think twice about
slurping a couple of megabytes into RAM to examine. But if that's too
much, how about simply reading in the first <chunk> bytes and checking
that for the characters you want? <chunk> should be large enough to
reveal what you need, but small enough that you're comfortable reading
it in; a sketch of this appears below. I'm not sure that there aren't
funny interactions between read and readline, so do be careful with
that.

Another approach to consider is libmagic. Google turns up a number of
links to Python wrappers for it; a usage sketch with one of them also
appears below.

        <mike
--
Mike Meyer <[EMAIL PROTECTED]>			http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.
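A minimal sketch of the chunk-based sniffing described above. The chunk
size, the format names, and the per-format heuristics are illustrative
guesses, not part of the original suggestion; tune them to the real
files:

    def sniff_format(path, chunk_size=8192):
        # Read only the first chunk_size bytes; never slurp the file.
        with open(path, 'rb') as f:
            chunk = f.read(chunk_size)
        head = chunk.lstrip()
        # An XML file almost always leads with a declaration or a tag.
        if head.startswith(b'<?xml') or head.startswith(b'<'):
            return 'xml'
        # Judge delimiters from the first line of the chunk only.
        first_line = chunk.splitlines()[0] if chunk else b''
        if b'\t' in first_line:
            return 'tab-delimited'
        if b'|' in first_line:
            return 'pipe-token'
        return 'unknown'

Opening in binary mode and doing a single read also sidesteps the
read/readline interaction worry: the sniffer closes the file after one
read call, and the real parser can reopen it however it likes.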
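And a sketch with one such wrapper, assuming the python-magic package
(other wrappers spell the call differently). Note that libmagic can
tell broad types apart, XML versus plain ASCII text for instance, but
it knows nothing about a custom pipe-token layout:

    import magic  # the python-magic wrapper; one of several libmagic bindings

    with open('incoming.txt', 'rb') as f:  # made-up filename
        head = f.read(8192)

    # from_buffer classifies only the bytes it is handed, so no more
    # than the first chunk of the file is ever read.
    print(magic.from_buffer(head))  # e.g. 'XML 1.0 document text'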