Lang Martin wrote:
> The overhead is related to multi-byte characters being recognized by read-char, and the general buzz is that other systems incur similar overhead handling utf-8. There was a comparison to Python (and maybe Perl as well) on a blog somewhere.
I had a look at this a while back (comparing against Python, my previous favourite language):
https://webmail.iro.umontreal.ca/pipermail/gambit-list/2006-September/000815...

Basically it appears that Python stores strings as a byte array plus character-encoding metadata. If the output uses the same character encoding as the input, the string never gets translated - hence the speed.
l = file("bib").read() # takes 9ms in python (4.3M file)
- that's basically the same speed as a C fread() on my laptop.
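
For what it's worth, here's a minimal sketch (not from my original test, and written against current Python 3, which decodes eagerly rather than keeping bytes + encoding metadata as described above) that shows the same gap: reading raw bytes is close to a single fread(), while forcing a UTF-8 decode pays a per-character cost much like a UTF-8-aware read-char does. The "bib" file name and the little timing helper are just for illustration:

import time

def timed(label, fn):
    start = time.perf_counter()
    result = fn()
    print("%s: %.1f ms" % (label, (time.perf_counter() - start) * 1000))
    return result

# Raw bytes: essentially one big read, no per-character work.
timed("raw bytes", lambda: open("bib", "rb").read())

# Decoded text: every byte sequence is validated and translated into
# characters - roughly the extra work the character-level reader has to do.
timed("decoded utf-8", lambda: open("bib", "r", encoding="utf-8").read())
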
Cheers,
Phil