Thanks for the long reply.  :)  It is frustrating that some things are designed 
really well and other bits are painful, like getting a listing of files...

OK, so I took a quick look at my code.  One way of fixing the Xiphos listing, 
for example, is to stop trying to parse the per-file sizes: trust the module 
size given in the module conf (which isn't always provided and is sometimes 
wrong!) and show a single total-size indicator rather than an indicator for 
each file downloaded in the module.  I think that may also
work for Bible.org?  It would be an option; we'd then need to check whether the 
file size returned by FTPTransport::getDirList() is zero and, if it is, fall 
back to the total size provided by the module conf.  Also, I believe HTTP 
reports a file's size in the Content-Length response header, so hopefully we 
could use that as well?  :)
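A quick sketch of that fallback decision (Python pseudocode for the logic only, 
not SWORD's actual C++; "listed_size" stands in for the getDirList() result and 
"conf_install_size" for whatever size field the module conf provides):

```python
def effective_size(listed_size, conf_install_size):
    """Pick a usable download size for the progress indicator.

    Prefer the per-file size from the directory listing; fall back to the
    total size from the module conf when the listing reports zero.  Return
    None when neither source is usable, so the caller can show an
    indeterminate progress bar instead of a wrong one.
    """
    if listed_size and listed_size > 0:
        return listed_size
    if conf_install_size and conf_install_size > 0:
        return conf_install_size
    return None
```

(Callers treating None as "size unknown" avoids showing a bogus 0-byte total.)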

Anyway, just a quick thought.  Of course, I'd prefer that we didn't try to 
parse a file listing like this . . .  and would rather we used the 
mods.d.tar.gz file & module ZIP files, but more on that in another email.  :)
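For what it's worth, reading the conf listing out of a mods.d.tar.gz would be 
straightforward; a sketch using Python's tarfile module (the archive contents 
here are made up for illustration):

```python
import io
import tarfile

def list_conf_files(tgz_bytes):
    """Return the .conf member names found in a mods.d tarball."""
    with tarfile.open(fileobj=io.BytesIO(tgz_bytes), mode="r:gz") as tar:
        return [m.name for m in tar.getmembers()
                if m.isfile() and m.name.endswith(".conf")]

# Build a tiny archive in memory to demonstrate:
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w:gz") as tar:
    data = b"[KJV]\nDataPath=./modules/texts/ztext/kjv/\n"
    info = tarfile.TarInfo("mods.d/kjv.conf")
    info.size = len(data)
    tar.addfile(info, io.BytesIO(data))

print(list_conf_files(buf.getvalue()))
```

No directory-listing parsing needed at all, which is rather the point.  :)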


Thanks, ybic
        nic...  :)

ps:  I wouldn't be against someone submitting something to w3c, but I wouldn't 
be holding my breath for it to be implemented, let alone approved...  :(  :(

On 06/11/2010, at 10:15 PM, Troy A. Griffitts wrote:

> On 11/06/2010 04:36 AM, Nic Carter wrote:
> 
>> I initially submitted a patch for HTTP parsing, but it only
>> works for CrossWire, not for the Bible.org or Xiphos repos,
>> and I have no intention of modifying the parsing code even
>> further to try to support more web servers!
> 
> :) thanks for the patch!  Yeah, surprisingly even FTP directory parsing
> is painful.  Even libCURL doesn't have an FTP directory listing parse
> function.  I couldn't believe that when I wrote the FTP code!  We found
> a portable library call ftpparse which parses directory listings for us.
> When doing the HTTP transport, I was hopeful we might find an
> httpdirparse or something :)  But no such luck, as of yet.
> 
> We were talking on #sword the other day about how odd it is that there
> is no w3c standard for the obvious use case:
> 
> Browse a hierarchy of folders+resources and retrieve some.
> 
> I brought this up with a frequent member of w3c committees and he
> suggested we develop a silly stupid minimal schema to represent a
> resource tree and a) submit it for w3c approval, and b) submit updates
> for Apache and IIS to update their Folder Index listings to comply to
> the proposal.  e.g., something like,
> 
> <?xml-stylesheet href="apache_look_and_feel.css" type="text/css"?>
> <folder name="My Documents">
>  <resource
>    type="file" mimetype="application/msword" name="War Of the Worlds.doc"/>
> </folder>
> 
> Then, end users wouldn't notice a difference, and we could have a
> standard to easily parse.  As always, you-know-who is in the details:
> attributes for permissions, mtime...; do you make the whole
> subdirectory hierarchy available from a directory request, or just the
> immediate children...
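(Aside: a listing in that shape really would be trivial to consume; a quick 
Python sketch against the example document above, details-in-the-details 
attributes omitted:)

```python
import xml.etree.ElementTree as ET

listing = """<folder name="My Documents">
  <resource
    type="file" mimetype="application/msword" name="War Of the Worlds.doc"/>
</folder>"""

root = ET.fromstring(listing)
entries = [(r.get("type"), r.get("name")) for r in root.findall("resource")]
print(entries)
```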
> 
> Anyway, we can dream of a bold new Internet where everything is
> standardized and straightforward for developers... :)  ahhhhh.
> 
> 
> Troy
> 
> _______________________________________________
> sword-devel mailing list: sword-devel@crosswire.org
> http://www.crosswire.org/mailman/listinfo/sword-devel
> Instructions to unsubscribe/change your settings at above page

