On Feb 15, 8:46 pm, Steve Holden <st...@holdenweb.com> wrote:
> Paul wrote:
> > Hi,
> > I currently have a webserver using BaseHTTPServer that serves images
> > like this:
> >
> >     if self.path.endswith(".jpg"):
> >         print(curdir + sep + self.path)
> >         f = open(curdir + sep + self.path, "rb")
> >         self.send_response(200)
> >         self.send_header('Content-type', 'image/jpg')
> >         self.end_headers()
> >         self.wfile.write(f.read())
> >         f.close()
> >         return
> >
> > Whilst it works, it does take quite a while to load (approx 10 secs for
> > a 4 MB file even though it's over the local connection) - does anyone
> > have any hints/tips for speeding it up?
>
> You could consider reading the file in smaller blocks and writing the
> output in a loop. That way the next block of the file can be read in
> while the network buffers are emptying.
>
> Just keep reading data and writing it until the number of data bytes you
> wrote is fewer than the number you tried to read.
>
> regards
>  Steve
> --
> Steve Holden        +1 571 484 6266   +1 800 494 3119
> Holden Web LLC             http://www.holdenweb.com/
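[For reference, here is roughly how the block-copy loop Steve describes might look as a complete handler. This is only a sketch under the assumptions from the original post (Python 2's BaseHTTPServer, a .jpg path check, files served from the current directory); the 64 KB block size and port 8000 are arbitrary choices, not something from the thread:

    # Sketch: a BaseHTTPServer handler that streams a .jpg in blocks
    # instead of one big read, as suggested above.  Python 2 only.
    from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer
    from os import curdir, sep

    BLOCK_SIZE = 64 * 1024  # assumed block size; tune as needed

    class ImageHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path.endswith(".jpg"):
                f = open(curdir + sep + self.path, "rb")
                self.send_response(200)
                self.send_header('Content-type', 'image/jpeg')
                self.end_headers()
                while True:
                    block = f.read(BLOCK_SIZE)
                    if not block:
                        break            # end of file reached
                    self.wfile.write(block)
                f.close()
                return

    if __name__ == '__main__':
        # hypothetical port for local testing
        HTTPServer(('', 8000), ImageHandler).serve_forever()
]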
Thanks for these, but they seem to have made no difference - it still loads
(visually) in chunks on the screen, taking quite a while. Interestingly, with
the looping idea I got it to print on each pass of the loop, and some writes
took a substantially longer time than others (I tried 1,500 and then 10,000
byte chunks), with the code being:

    if self.path.endswith(".jpg"):
        print(curdir + sep + self.path)
        f = open(curdir + sep + self.path, "rb")
        self.send_response(200)
        self.send_header('Content-type', 'image/jpg')
        self.end_headers()
        h = 10000
        while h == 10000:
            g = f.read(10000)
            h = len(g)
            print h
            self.wfile.write(g)
        #self.wfile.write(f.read())
        #shutil.copyfileobj(f, self.wfile)
        f.close()
        return

I also tried loading the file into memory before it was requested, but that
also made no difference.

If anyone's got any suggestions, I would be very grateful,

Paul
--
http://mail.python.org/mailman/listinfo/python-list
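[One thing worth trying - an assumption on my part, not something confirmed in the thread - is sending a Content-Length header before the body and letting shutil.copyfileobj (already commented out above) do the copying. Without a length the browser cannot tell how much data is coming, which may affect how it chooses to render the image as it arrives. The serve_jpeg helper name is hypothetical; it stands in for the body of the .jpg branch in do_GET:

    # Variant of the .jpg branch above: same path check assumed, but with
    # a Content-Length header (assumed, not confirmed, to help the client)
    # and shutil.copyfileobj doing the block copy.
    import os
    import shutil
    from os import curdir, sep

    def serve_jpeg(self):                        # hypothetical helper called from do_GET
        f = open(curdir + sep + self.path, "rb")
        size = os.fstat(f.fileno()).st_size      # file size from the already-open file
        self.send_response(200)
        self.send_header('Content-type', 'image/jpeg')   # 'image/jpeg' is the registered MIME type
        self.send_header('Content-Length', str(size))
        self.end_headers()
        shutil.copyfileobj(f, self.wfile)        # copies in 16 KB blocks by default
        f.close()
]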