On 11-Feb-09, at 10:25 AM, Adrien Piérard wrote:
That is the problem: whereas the HyperSpec is a website where they kept it simple, GETting a page from the wiki gives you loads of useless information (that is, what makes it a wiki: all those links on the left, the "edit this page" and "discussion" links on top, etc.).
Therefore, I wonder if it's possible to dump a static snapshot of the wiki's content, generating plain HTML with no hint that it comes from a wiki.
Say,
  <html>
    <head>
      <title>foo</title>
    </head>
    <body>
      <h1>foo</h1>
      This is the <b>foo</b> procedure.
      It is a kind of <b><a href="bar.html">bar</a></b>.
    </body>
  </html>
Which is a lot friendlier than what you'd get by GETting the wiki page (I once tried to fetch some pages from Wikipedia; the raw HTML is close to impossible to parse properly). That could then be packaged along with Gambit in distributions/OSs, à la HyperSpec (though ours would be meant to evolve along with Gambit, unlike the HyperSpec, since CL is already defined).
I already have a Gambit script to do that. Perhaps this can be added to the distribution if I clean it up a bit.
Marc
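
For illustration, here is a minimal sketch of what such a dump script might look like in Gambit Scheme. The wiki host, page path, and the use of MediaWiki's action=render URL parameter (which returns only the rendered page body, without the surrounding navigation) are assumptions made for this sketch, not a description of Marc's actual script.

  ;; Fetch a wiki page's rendered body and save a stripped-down static copy.

  (define (http-get-body host path)
    ;; Open a TCP connection and issue a plain HTTP/1.0 GET request.
    (let ((port (open-tcp-client
                 (list server-address: host port-number: 80))))
      (display (string-append "GET " path " HTTP/1.0\r\n"
                              "Host: " host "\r\n"
                              "Connection: close\r\n\r\n")
               port)
      (force-output port)
      ;; Skip the response headers (everything up to the first blank line).
      (let loop ()
        (let ((line (read-line port)))
          (if (and (string? line)
                   (not (string=? line ""))
                   (not (string=? line "\r")))
              (loop))))
      ;; Slurp the rest of the stream: the page body.
      (let ((body (read-line port #f)))
        (close-port port)
        (if (string? body) body ""))))

  (define (dump-page title host path output-file)
    ;; Wrap the fetched body in a bare HTML skeleton, with none of
    ;; the wiki chrome around it.
    (let ((body (http-get-body host path)))
      (with-output-to-file output-file
        (lambda ()
          (display (string-append
                    "<html><head><title>" title "</title></head>\n"
                    "<body>\n" body "</body></html>\n"))))))

  ;; Example (hypothetical host and page name):
  ;; (dump-page "foo" "wiki.example.org"
  ;;            "/index.php?title=foo&action=render" "foo.html")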