[gambit-list] man _gambit func_

Adrien Piérard pierarda at iro.umontreal.ca
Wed Feb 11 10:25:16 EST 2009


>> Also, looking at slime may be nice, the hyperspec integrates quite well
>> (whether or not it is installed locally).
>> Can the wiki be fetched/dumped in a parsable format on a local machine
>> (say, HTML)?
>
> What do you mean?  A simple GET (with curl or Gambit) will get you the HTML
> of a page.

That is the problem: whereas the HyperSpec is a website where they kept it
simple, GETting a page from the wiki brings in loads of useless information
(that is, what makes it a wiki: all those links on the left, the "edit this
page" and "discussion" links on top, etc.).

Therefore, I wonder if it's possible to dump a static image of the wiki's
content, one that would produce proper HTML with no hint that it comes from
a wiki.

Say,

<html>
  <head>
    <title>foo</title>
  </head>
  <body>
    <h1>foo</h1>
    This is the <b>foo</b> procedure. It is a kind of
    <b><a href="bar.html">bar</a></b>.
  </body>
</html>

That is a lot friendlier than what you'd get by GETting the wiki page
directly (I once tried to fetch some pages from Wikipedia; the result is
close to impossible to parse properly).
And that could be packaged along with Gambit in distributions/OSes, à la
the HyperSpec (though ours would be meant to evolve along with Gambit,
unlike the HyperSpec, since CL is a fixed standard).
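
Assuming the wiki runs MediaWiki, its index.php accepts action=render,
which returns just the rendered page body without the sidebar and edit
links; a dump script could fetch that and wrap it in a skeleton like the
one above. A sketch, reusing the hypothetical http-get helper from earlier
(the /wiki/index.php path is a guess at the wiki's layout, not its real
URL scheme):

;; Sketch: fetch a page via MediaWiki's action=render (body HTML only)
;; and wrap it in a bare static page, as in the skeleton above.
;; Assumes the http-get helper sketched earlier; the path is a guess
;; at the wiki's layout, adjust it to the real installation.
(define (dump-static-page host title)
  (let ((body (http-get host
                        (string-append "/wiki/index.php?title=" title
                                       "&action=render"))))
    (with-output-to-file (string-append title ".html")
      (lambda ()
        (display (string-append
                   "<html>\n"
                   "  <head><title>" title "</title></head>\n"
                   "  <body>\n"
                   "    <h1>" title "</h1>\n"
                   body
                   "  </body>\n"
                   "</html>\n"))))))

;; (dump-static-page "example.org" "foo")   ; writes foo.html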

P!

-- 
Français, English, 日本語, 한국어