On Apr 24, 2013, at 8:35 PM, Aldrich <fenghelongn...@gmail.com> wrote:

> As you can see from the program, I have already identified the files' URLs.
> My problem is: if I do not know the files' URLs and only know the site's URL,
> how can I download the files on that website (for example, mirrors.163.com)?
> I know that the FTP protocol can download multiple files. Can anyone help me
> with this problem?

As I said earlier, HTTP, unlike FTP, has no universally available method of 
getting a directory's contents, so you will have to figure out how to download 
the listing yourself and then parse the result in code. Some sites print 
directory listings in HTML, some use WebDAV, and most intentionally obscure the 
underlying filesystem. The site you mentioned appears to do the first of those 
three, so your program will have to read the data from the Web site and then 
parse the HTML for hyperlinks. Once you have the hyperlinks, you can use easy 
handles to fetch them. Good luck.
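
For what it's worth, here is a minimal sketch of that flow using a single easy
handle. It fetches the front page of mirrors.163.com (the site you named) into
memory and then scans it for href="..." attributes with strstr(). That scan is
only a placeholder: a real program should use an actual HTML parser, and it
would also have to resolve relative links against the base URL before fetching
them.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <curl/curl.h>

/* Growable buffer that accumulates the response body. */
struct buffer {
    char *data;
    size_t len;
};

static size_t write_cb(char *ptr, size_t size, size_t nmemb, void *userdata)
{
    struct buffer *buf = (struct buffer *)userdata;
    size_t total = size * nmemb;
    char *tmp = realloc(buf->data, buf->len + total + 1);
    if (!tmp)
        return 0; /* returning less than 'total' aborts the transfer */
    buf->data = tmp;
    memcpy(buf->data + buf->len, ptr, total);
    buf->len += total;
    buf->data[buf->len] = '\0';
    return total;
}

int main(void)
{
    struct buffer buf = { NULL, 0 };
    CURL *curl;
    CURLcode res;

    curl_global_init(CURL_GLOBAL_DEFAULT);
    curl = curl_easy_init();
    if (curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "http://mirrors.163.com/");
        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, write_cb);
        curl_easy_setopt(curl, CURLOPT_WRITEDATA, &buf);
        curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
        res = curl_easy_perform(curl);
        if (res != CURLE_OK)
            fprintf(stderr, "curl_easy_perform() failed: %s\n",
                    curl_easy_strerror(res));
        curl_easy_cleanup(curl);
    }

    /* Naive link extraction: print everything between href=" and the
       next double quote. Good enough to see the idea, not for real use. */
    if (buf.data) {
        char *p = buf.data;
        while ((p = strstr(p, "href=\"")) != NULL) {
            char *start = p + 6;
            char *end = strchr(start, '"');
            if (!end)
                break;
            printf("%.*s\n", (int)(end - start), start);
            p = end + 1;
        }
        free(buf.data);
    }

    curl_global_cleanup();
    return 0;
}

Each URL this prints could then be fetched with its own easy handle, or with
the multi interface if you want the downloads to run in parallel.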

Nick Zitzmann
<http://www.chronosnet.com/>

-------------------------------------------------------------------
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html