On Wed, Feb 07, 2007 at 03:00:05PM +0000, Chris Lale wrote:
> Douglas Allan Tutty wrote:
> > If we were to have a page clearly labeled as GPL, would you be able
> > to spit out an HTML copy of a wiki page any better than we could
> > pull off with a browser?
>
> I'm not quite sure what you mean. The wiki page is already HTML - the
> source code is the wikitext. If you use DocBook, the web page is HTML
> and the source code is the SGML/XML.
>
> They are just web pages. You can view them in a browser, print them,
> download them with wget, etc. Note that if you do the latter two, a
> different CSS is involved, so you see only the plain article and none
> of the website stuff. Thoughtful design by Mediawiki, eh?

Chris,
What I mean is that if we create a main wiki page with a separate wiki
page for each major portion of the project, instead of one HUGE wiki
page, I can't get wget to follow the links in the wiki page to grab
those other pages.

Take an example:

    MainPage (TOC)
    Chapter1-page
    Chapter2-page
    .
    .

I can use wget or a browser on the TOC, Chapter1-page, and
Chapter2-page, but I end up with three separate HTML pages that are not
really linked (in fact, the links in the TOC still point to the wiki,
not to the files I download with wget).

At your end, could you create an html.gz of all three with functioning
internal links? It would be great if this were automated and there was
a button that someone could click to download a snapshot at any time.

Doug.
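
P.S. For reference, here is roughly the kind of invocation I have in
mind. This is only a sketch, not something I have working against our
wiki, and the URL is hypothetical; the options themselves are standard
wget:

    wget --recursive --level=1 --no-parent \
         --convert-links --page-requisites --html-extension \
         http://wiki.example.org/MainPage

In principle, --recursive with --level=1 fetches the TOC plus the
chapter pages it links to, --convert-links then rewrites the TOC's
links to point at the local copies instead of back at the wiki,
--page-requisites grabs the CSS and images, and --html-extension saves
pages whose URLs lack a suffix as .html so a browser opens them
cleanly. If something like this worked reliably at your end, the
"snapshot button" could just run it and gzip the result.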