I'm disappointed that my code is four times slower than C. Could someone more knowledgeable than I look at my code and tell me where the time is going?
Some comments:
1) Depending on your definition of csv files, reading the input a line-at-a-time may not work. My code allows newlines to be embedded within quoted fields, which has both benefits and costs. The benefit is that some data requires embedded newlines (think of multi-line addresses), and some csv parsers allow them (for instance, MS Excel reads csv files with embedded newlines). The cost is that a missing quote can cause the entire remaining input to be sucked up into a single (very long) field. From a programming point of view, it means I almost have to handle the input a character at a time (or at least a buffer-load at a time, where a buffer is some fixed number of characters unrelated to the presence of newlines, but standard Scheme provides no way to read a buffer-load); a sketch of this approach follows these comments.
2) My code correctly handles input with end-of-line marked by CR, LF, CR/LF, or LF/CR, all of which exist in the wild. Even ignoring the problem of embedded newlines, reading the input a line-at-a-time may force the end-of-line convention to be that of the system processing the data, which may be wrong if the data comes from some other system.
3) As a further portability concern, my code allows the field separator to be something other than a comma; this is common in those European countries where the decimal point is written as a comma instead of a period and the field separator is a semicolon instead of a comma.
4) It would be good to know where the time is going. As a general rule, reading input one character at a time is expensive, even if it is necessary in this case; can we tell exactly how much that contributes to the runtime of the code? (The timing sketch after these comments shows one way to measure it.) I also note that though I generate a lot of garbage, at any given collection the amount of live data is probably quite small (one input record, or less), so I expect garbage collection to be quite quick. Another possibility is that I used many small functions to implement the state machine, instead of a loop inside a single function; are function calls expensive?
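For concreteness, here is a minimal sketch (an illustration, not the code that was benchmarked) of the approach comments 1) through 3) describe: the record is read a character at a time, quoted fields may contain the separator and embedded newlines, a doubled quote inside a quoted field is a literal quote, any of CR, LF, CR/LF, or LF/CR ends a record, and the separator is a parameter:

(define (read-csv-record port separator)
  ;; Returns a list of field strings, or the eof object if the input is empty.
  (define (field chars) (list->string (reverse chars)))
  (let loop ((fields '()) (chars '()) (quoted? #f))
    (let ((c (read-char port)))
      (cond ((eof-object? c)
             (if (and (null? fields) (null? chars))
                 c
                 (reverse (cons (field chars) fields))))
            (quoted?
             (if (char=? c #\")
                 (if (eqv? (peek-char port) #\") ; doubled quote is a literal quote
                     (begin (read-char port)
                            (loop fields (cons #\" chars) #t))
                     (loop fields chars #f))       ; closing quote
                 (loop fields (cons c chars) #t))) ; separators and newlines pass through
            ((char=? c #\")
             (loop fields chars #t))               ; opening quote
            ((char=? c separator)
             (loop (cons (field chars) fields) '() #f))
            ((or (char=? c #\return) (char=? c #\newline))
             ;; accept CR, LF, CR/LF, and LF/CR as end-of-record
             (let ((d (peek-char port)))
               (if (and (char? d)
                        (char=? d (if (char=? c #\return) #\newline #\return)))
                   (read-char port))
               (reverse (cons (field chars) fields))))
            (else
             (loop fields (cons c chars) #f))))))

For example, (read-csv-record (open-input-string "a;\"two\nlines\";c\n") #\;) returns ("a" "two\nlines" "c").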
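And one rough way to answer the question in 4), assuming Gambit and a hypothetical input file data.csv: time a loop that does nothing but read characters, and subtract that from the full parser's time. Gambit's time form also reports garbage-collection time separately, which would confirm or refute the guess that collections are cheap:

(define (count-chars filename)
  ;; read the whole file one character at a time, doing no other work
  (with-input-from-file filename
    (lambda ()
      (let loop ((n 0))
        (if (eof-object? (read-char))
            n
            (loop (+ n 1)))))))

(time (count-chars "data.csv")) ; prints real time, cpu time, and gc time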
Based on my very quick reading, it looks like libcsv doesn't allow embedded newlines, doesn't handle odd end-of-line conventions, and hard-codes the comma as the field separator. Perhaps my code is slower because it does all these things?
Insights appreciated.
Phil
On 2/9/07, Bradley Lucier <lucier@math.purdue.edu> wrote:
On Feb 9, 2007, at 5:08 PM, Phil Dawes wrote:
Bradley Lucier wrote:
On Feb 9, 2007, at 1:36 PM, Phil Dawes wrote:
wrapped libcsv          ~240ms
Marc comma splitter     ~510ms
Phil Bewig csv parser  ~1008ms
Just a question: did you compile the Scheme code with the usual benchmark declarations?

(declare (standard-bindings) (extended-bindings) (block) ;; basically R6RS
         (fixnum) (not safe)) ;; I presume there's only fixnum characters in a line ;-)
Oops! - good point.
That brings the Phil Bewig parser down to ~950ms.
Ah, not much.
What happens when you configure Gambit with --enable-char-size=1? (The default is 4; I presume the C you're using has one-byte chars.)
Or you could give a URL for your data for others to play with.
(Next I would suggest doing line reads and stepping through the characters locally without going through the trampoline required for each call to read-char. It seems that doing your own line buffering may be natural for this problem.)
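A minimal sketch of that suggestion (an illustration, not tested against the benchmark), assuming Gambit's read-substring is available: fill a local string buffer in large chunks and step through it with string-ref, so that the common per-character case is an index comparison rather than a full call through read-char:

(define (make-char-reader port)
  ;; returns a thunk that yields the next character, or the eof object
  (let ((buf (make-string 65536)) ; buffer size is an arbitrary choice
        (pos 0)
        (len 0))
    (lambda ()
      (if (< pos len)
          (let ((c (string-ref buf pos)))
            (set! pos (+ pos 1))
            c)
          (let ((n (read-substring buf 0 (string-length buf) port)))
            (if (= n 0)
                (read-char port) ; at end of input, let read-char produce the eof object
                (begin
                  (set! len n)
                  (set! pos 1)
                  (string-ref buf 0))))))))

A real parser would also need a way to peek at the next character, but the idea is the same: the buffering happens once per 64K characters instead of once per character.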
Brad