Hi Gambit List,
I needed a fast csv parser for parsing large (multi-gig) files, so I've wrapped the libcsv C code by Robert Gamble[1]. The first cut of csv.scm is here: http://phildawes.net/2007/gambit-csv/0.1/csv.scm
Example usage:
(define it (csv-make-iterator fname))
(it) ; returns the first row as a list
(it) ; returns 2nd row ..etc..
; (it) returns '() when it hits the end of the file.
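For example, to consume a whole file (count-rows is just for illustration):

(define (count-rows fname)
  (let ((it (csv-make-iterator fname)))
    (let loop ((row (it)) (n 0))
      (if (null? row)
          n
          (loop (it) (+ n 1))))))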
I'm very new to gambit/scheme - is this a reasonable interface or is there a more schemey idiom I should be presenting?
Cheers,
Phil
On 8-Feb-07, at 4:03 PM, Phil Dawes wrote:
I needed a fast csv parser for parsing large (multi-gig) files so I've wrapped the libcsv c code by Robert Gamble[1]. The first cut of csv.scm is here: http://phildawes.net/2007/gambit-csv/0.1/csv.scm
Interesting. But why would you do it in C (in 600 lines of code) when you can do it in 20 lines of Scheme?
(define (csv-make-iterator fname)
  (let ((port (open-input-file fname)))
    (lambda () (read-csv port))))

(define (read-all-csv port)
  (read-all port read-csv))

(define (read-csv port)
  (let ((line (read-line port)))
    (if (eof-object? line)
        line
        (split-csv line))))

(define (split-csv str)
  (call-with-input-string
   str
   (lambda (port)
     (read-all port (lambda (p) (read-line p #\,))))))
(define it (csv-make-iterator "test"))
(it) => ("11" "22" "33")
(it) => ("a" "b")

(call-with-input-string "11,22,33\na,b\n" read-all-csv)
  => (("11" "22" "33") ("a" "b"))
Marc
Marc Feeley wrote:
Interesting. But why would you do it in C (in 600 lines of code) when you can do it in 20 lines of Scheme?
Ignorance maybe! There's a little more to csv parsing than just splitting on ",", but having said that your code runs a lot quicker than I was expecting.
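For example, a quoted field containing the separator gets split in the wrong place - with your split-csv I'd expect something like:

(split-csv "\"Smith, John\",42")
  => ("\"Smith" " John\"" "42")  ; want ("Smith, John" "42")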
I had tried http://www.neilvandyke.org/csv-scm/ but this was really slow, so I turned my sights to wrapping a C library.
Thanks!
Phil
I replied to Phil privately, but since others have joined the conversation, I'll respond publicly. I use the code at pbewig.googlepages.com/ProcessingFieldedTextFiles.pdf to process comma-delimited files and other text-formatted databases. I've never timed it, or compared it to other code, but it's always been fast enough for what I wanted to do. Let me know if you find it useful.
Phil (another Phil, not the original poster)
On 9-Feb-07, at 9:23 AM, Phil Bewig wrote:
Nice! Do you think you could take the time to turn this into a Snow package? Shouldn't take more than 15 minutes.
Marc
But Marc, Snow isn't even released yet lol
Guillaume
Hi All,
Ok - that prompted me to do a little speed testing on a large file (6000-odd records). Note that Marc's comma splitter doesn't actually do the parsing properly, because many records are split over multiple lines, so I'm guessing this is an upper performance limit for pure Scheme?
wrapped libcsv        ~240ms
Marc comma splitter   ~510ms
Phil Bewig csv parser ~1008ms
I've pasted the timing code below.
Cheers,
Phil
(load "feeley-csv") (load "csv") (load "bewig-csv")
(define (test-feeley-speed) (let ((it (csv-make-iterator-feeley "data.csv"))) (let loop ((e (it))) (if (not (eof-object? e)) (loop (it))))))
(define (test-libcsv-speed) (let ((it (csv-make-iterator "data.csv"))) (let loop ((e (it))) (if (not (equal? e '())) (loop (it))))))
(define (test-bewig-speed) (read-all (open-input-file "data.csv") read-csv-record) #t)
(define (tester fn) (for-each (lambda (e) (##gc) (time (fn))) '(1 2 3 4 5)))
(tester test-feeley-speed) ; I get ~510 ms (tester test-libcsv-speed) ; I get ~240 ms (tester test-bewig-speed) ; I get ~1008 ms
On Feb 9, 2007, at 1:36 PM, Phil Dawes wrote:
wrapped libcsv        ~240ms
Marc comma splitter   ~510ms
Phil Bewig csv parser ~1008ms
Just a question---did you compile the Scheme code with the usual benchmark declarations
(declare
  (standard-bindings) (extended-bindings) (block) ;; basically R6RS
  (fixnum) (not safe)) ;; I presume there's only fixnum characters in a line ;-)
Brad
Hi Brad,
Bradley Lucier wrote:
Just a question---did you compile the Scheme code with the usual benchmark declarations
Oops! - good point.
That brings the Phil Bewig parser down to ~950 ms. (It didn't make much difference to the other two.)
Cheers,
Phil
On Feb 9, 2007, at 5:08 PM, Phil Dawes wrote:
Oops! - good point.
That brings the Phil Bewig parser down to ~950 ms
Ah, not much.
What happens when you configure Gambit with --enable-char-size=1? (The default is 4; I presume the C you're using has one-byte chars.)
Or you could give a URL for your data for others to play with.
(Next I would suggest doing line reads and stepping through the characters locally without going through the trampoline required for each call to read-char. It seems that doing your own line buffering may be natural for this problem.)
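Something like this sketch, say - it only counts separators rather than really parsing, but it shows the shape of the loop:

(define (count-seps port)
  ;; one trampoline call per line instead of one per character:
  ;; read a whole line, then step through it with string-ref
  (let loop ((total 0))
    (let ((line (read-line port)))
      (if (eof-object? line)
          total
          (let scan ((i 0) (n total))
            (if (< i (string-length line))
                (scan (+ i 1)
                      (if (char=? (string-ref line i) #\,) (+ n 1) n))
                (loop n)))))))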
Brad
I'm disappointed that my code is four times worse than C. Could someone more knowledgeable than I look at my code and tell me where the time is going?
Some comments:
1) Depending on your definition of csv files, reading the input a line-at-a-time may not work. My code allows newlines to be embedded within quoted fields (there's a concrete example after these comments), which has both benefits and costs. The benefit is that some data requires embedded newlines (think of multi-line addresses), and some csv parsers allow embedded newlines (for instance, MS Excel reads csv files with embedded newlines). The cost is that a missing quote can cause the entire remaining input to be sucked up into a single (very long) field. From a programming point of view, it means I almost have to handle the input a character at a time (or at least a buffer-load at a time, where a buffer is some fixed number of characters unrelated to the presence of newlines, but standard Scheme provides no way to read a buffer-load).
2) My code correctly handles input with end-of-line marked by CR, LF, CR/LF, or LF/CR, all of which exist in the wild. Even ignoring the problem of embedded newlines, reading the input a line-at-a-time may constrain the choice of the end-of-line character to be the same as that of the system processing the data, which may be incorrect if the data comes from some other system.
3) As a further portability concern, my code allows for the case where the field separator is not a comma, which is common in those European countries where the decimal point is written as a comma instead of a period and the field separator is a semicolon instead of a comma.
4) It would be good to know where the time is going. As a general rule, reading input one character at a time is expensive, even if it is necessary in this case; can we tell exactly how much that contributes to the runtime of the code? I also note that though I generate a lot of garbage, at any given collection the amount of live data is probably quite small (one input record, or less), so I expect garbage collection to be quite quick. Another possibility is that I used many small functions to implement the state machine, instead of a loop inside a single function; are function calls expensive?
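To make point 1 concrete, here is the kind of input I mean and what I expect my parser to produce for it (a sketch; the address is invented):

(call-with-input-string "\"123 Main Street\nApt 4\",Springfield"
                        read-csv-record)
  => ("123 Main Street\nApt 4" "Springfield")

The quoted first field spans two lines of the file, but it is still one field of one record.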
Based on my very quick reading, it looks like libcsv doesn't allow embedded newlines, doesn't handle odd end-of-line conventions, and hard codes the comma as field separator. Perhaps my code is slower because it does all these things?
Insights appreciated.
Phil
I know that this has turned into yet another Scheme benchmarking extravaganza, but I wonder if anyone has considered trying to use PADS to solve the original problem?
I'd be curious to know how well it does (I've only read their papers, not looked at the system itself.)
Robby
On Feb 9, 2007, at 7:13 PM, Phil Bewig wrote:
I'm disappointed that my code is four times worse than C. Could someone more knowledgeable that I look at my code and tell me where the time is going?
Well, I'm not that guy. However, the buzz (from the internets, now in the back of my mind) seems to be that gambit is always ~4x slower if you read strings. gsc can be compiled with character width 1 (as someone said in this thread), and I've been stewing on a (with-lickity-split-strings ...) macro that replaces all of the string bits in code with u8-vectors, which, being single byte, should recover the speed.
The overhead is related to multi-byte characters being recognized by read-char, and the buzz reports that other systems incur similar overhead to handle utf-8. There was a comparison to python (and maybe perl as well) on a blog somewhere.
I'm a newbie, though, and I didn't try any code related to this before writing.
On Feb 9, 2007, at 8:15 PM, Lang Martin wrote:
However, the buzz (from the internets, now in the back of my mind) seems to be that gambit is always ~4x slower if you read strings. ...The overhead is related to multi-byte characters being recognized by read-char, and the buzz reports that other systems incur similar overhead to handle utf-8.
I think we get the factor of four for general string *processing*; there seems to be enough overhead in character I/O that the difference between reading one-byte characters and four-byte characters is small.
Brad
Lang Martin wrote:
The overhead is related to multi-byte characters being recognized by read-char, and the buzz reports that other systems incur similar overhead to handle utf-8. There was a comparison to python (and maybe perl as well) on a blog somewhere.
I had a look at this a while back (comparing with Python, my previous favourite language): https://webmail.iro.umontreal.ca/pipermail/gambit-list/2006-September/000815... Basically it appears that Python stores strings as a byte array plus character-encoding metadata. If the output is in the same char encoding as the input then it never gets translated - hence the speed.
l = file("bib").read() # takes 9ms in python (4.3M file)
- that's basically the same speed as a C fread() on my laptop.
Cheers,
Phil
Phil Bewig wrote:
Based on my very quick reading, it looks like libcsv doesn't allow embedded newlines, doesn't handle odd end-of-line conventions, and hard codes the comma as field separator. Perhaps my code is slower because it does all these things?
Hi Phil,
I might be missing something, but I think it does everything except allow other field separators. It also parses one character at a time. See the 'parse_csv' C function in: http://www.phildawes.net/2007/gambit-csv/0.1/csv.scm
Cheers,
Phil
I stand corrected.
It does read into a buffer, then parse one character at a time. Perhaps my function is slow because I read one character at a time instead of reading into a buffer.
Phil
OK, I've done a few experiments. I've added
(declare (standard-bindings)(extended-bindings)(block)(fixnum)(not safe))
to all files.
Here's the original code on my machine, an Opteron running in 64-bit mode, while giving it a 100 MB heap:
(load "bewig")
"/export/users/lucier/lang/scheme/csv/bewig.o7"
(time (begin (read-all (open-input-file "data.csv") read-csv-
record) #t)) (time (begin (read-all (open-input-file "data.csv") read-csv-record) #t)) 591 ms real time 580 ms cpu time (497 user, 83 system) 2 collections accounting for 29 ms real time (25 user, 4 system) 264023440 bytes allocated 25865 minor faults no major faults #t
Next we buffer the collection of characters into a field, (re-)using an extensible string (really ugly code at bottom):
(load "bewig2")
"/export/users/lucier/lang/scheme/csv/bewig2.o17"
(time (begin (read-all (open-input-file "data.csv") read-csv-
record) #t)) (time (begin (read-all (open-input-file "data.csv") read-csv-record) #t)) 436 ms real time 435 ms cpu time (420 user, 15 system) 1 collection accounting for 13 ms real time (13 user, 0 system) 45290944 bytes allocated 3067 minor faults no major faults #t
Next we "simply" use macro-read-char from _io.scm instead of read- char in the original code:
(load "bewig+macro-read-char")
"/export/users/lucier/lang/scheme/csv/bewig+macro-read-char.o2"
(time (begin (read-all (open-input-file "data.csv") read-csv-
record) #t)) (time (begin (read-all (open-input-file "data.csv") read-csv-record) #t)) 175 ms real time 175 ms cpu time (171 user, 4 system) 3 collections accounting for 29 ms real time (29 user, 1 system) 264030352 bytes allocated 376 minor faults no major faults #t
Finally we use macro-read-char in the ugly, buffered-output code:
(load "bewig2+macro-read-char")
"/export/users/lucier/lang/scheme/csv/bewig2+macro-read-char.o2"
(time (begin (read-all (open-input-file "data.csv") read-csv-
record) #t)) (time (begin (read-all (open-input-file "data.csv") read-csv-record) #t)) 105 ms real time 105 ms cpu time (98 user, 7 system) no collections 45270736 bytes allocated 3106 minor faults no major faults #t
So this is over five times as fast as the original code on this test file.
We gain a factor of about three just by inlining the fast path of read-char.
Marc, if you don't want Gambit to suck on simple IO processing like this, you should use the macro-read-char expansion for read-char in the compiler.
Brad
(define (read-csv-record . args)

  (define (read-csv sep port)

    (define (add-char-to-field c field)
      (let ((length (field-length field))
            (buffer (field-buffer field)))
        (if (< length (string-length buffer))
            (begin
              (string-set! buffer length c)
              (field-length-set! field (+ length 1))
              field)
            (let ((new-buffer (string-append buffer (make-string length))))
              (string-set! new-buffer length c)
              (field-length-set! field (+ length 1))
              (field-buffer-set! field new-buffer)
              field))))

    (define (extract-string-from-field! field)
      (let ((result (substring (field-buffer field) 0 (field-length field))))
        (reset-field! field)
        result))

    (define (new-field) (cons (make-string 800) 0))
    (define (field-buffer field) (car field))
    (define (field-buffer-set! field value) (set-car! field value))
    (define (field-length field) (cdr field))
    (define (field-length-set! field value) (set-cdr! field value))
    (define (reset-field! field) (field-length-set! field 0) field)

    (define (add-field! field fields)
      (cons (extract-string-from-field! field) fields))

    (define (start field fields)
      (let ((c (read-char port)))
        (cond ((eof-object? c) (reverse fields))
              ((char=? #\return c) (carriage-return field fields))
              ((char=? #\newline c) (line-feed field fields))
              ((char=? #\" c) (quoted-field field fields))
              ((char=? sep c)
               (let ((fields (add-field! field fields)))
                 (not-field field fields)))
              (else (unquoted-field (add-char-to-field c field) fields)))))

    (define (not-field field fields)
      (let ((c (read-char port)))
        (cond ((eof-object? c) (cons "" fields))
              ((char=? #\return c) (carriage-return '() (add-field! field fields)))
              ((char=? #\newline c) (line-feed '() (add-field! field fields)))
              ((char=? #\" c) (quoted-field field fields))
              ((char=? sep c)
               (let ((fields (add-field! field fields)))
                 (not-field field fields)))
              (else (unquoted-field (add-char-to-field c field) fields)))))

    (define (quoted-field field fields)
      (let ((c (read-char port)))
        (cond ((eof-object? c) (add-field! field fields))
              ((char=? #\" c) (may-be-doubled-quotes field fields))
              (else (quoted-field (add-char-to-field c field) fields)))))

    (define (may-be-doubled-quotes field fields)
      (let ((c (read-char port)))
        (cond ((eof-object? c) (add-field! field fields))
              ((char=? #\return c) (carriage-return '() (add-field! field fields)))
              ((char=? #\newline c) (line-feed '() (add-field! field fields)))
              ((char=? #\" c) (quoted-field (add-char-to-field #\" field) fields))
              ((char=? sep c)
               (let ((fields (add-field! field fields)))
                 (not-field field fields)))
              (else (unquoted-field (add-char-to-field c field) fields)))))

    (define (unquoted-field field fields)
      (let ((c (read-char port)))
        (cond ((eof-object? c) (add-field! field fields))
              ((char=? #\return c) (carriage-return '() (add-field! field fields)))
              ((char=? #\newline c) (line-feed '() (add-field! field fields)))
              ((char=? sep c)
               (let ((fields (add-field! field fields)))
                 (not-field field fields)))
              (else (unquoted-field (add-char-to-field c field) fields)))))

    (define (carriage-return field fields)
      (if (char=? #\newline (peek-char port)) (read-char port))
      fields)

    (define (line-feed field fields)
      (if (char=? #\return (peek-char port)) (read-char port))
      fields)

    (if (eof-object? (peek-char port))
        (peek-char port)
        (reverse (start (new-field) '()))))

  (cond ((null? args)
         (read-csv #\, (current-input-port)))
        ((and (null? (cdr args)) (char? (car args)))
         (read-csv (car args) (current-input-port)))
        ((and (null? (cdr args)) (port? (car args)))
         (read-csv #\, (car args)))
        ((and (pair? (cdr args)) (null? (cddr args))
              (char? (car args)) (port? (cadr args)))
         (read-csv (car args) (cadr args)))
        (else (car '()))))
Wow! I'm both impressed and sad. Impressed at the speed-up. Sad that something as simple as read-char can be so slow. And I don't find the fixed-up code nearly as ugly as the C code in libcsv.
Phil
Sorry for replying to my own message, but ...
I've suggested before to Marc that he inline at least the fast path to read-char, so that would be about (from _io.scm):
(define-prim (##read-char port)
  (##declare (not interrupts-enabled))
  (macro-port-mutex-lock! port) ; get exclusive access to port
  (let loop ()
    (let ((char-rlo (macro-character-port-rlo port))
          (char-rhi (macro-character-port-rhi port)))
      (if (##fixnum.< char-rlo char-rhi)
          ; the next character is in the character read buffer
          (let ((c (##string-ref (macro-character-port-rbuf port) char-rlo)))
            (if (##not (##char=? c #\newline))
                ; frequent simple case, just advance rlo
                (begin
                  (macro-character-port-rlo-set! port (##fixnum.+ char-rlo 1))
                  (macro-port-mutex-unlock! port)
                  c)
                ....
plus some code checking that the port is a character input port:
(define-prim (read-char #!optional (port (macro-absent-obj)))
  (macro-force-vars (port)
    (let ((p (if (##eq? port (macro-absent-obj))
                 (macro-current-input-port)
                 port)))
      (macro-check-character-input-port p 1 (read-char p)
        (##read-char p)))))
Maybe that's still a good idea. The corresponding code in the standard C library is a macro that expands to something like this.
Brad
Bradley Lucier wrote:
On Feb 9, 2007, at 5:08 PM, Phil Dawes wrote:
That brings the Phil Bewig parser down to ~950 ms
Ah, not much.
What happens when you configure gambit with --enable-char-size=1 (the default is 4). (I presume the C you're using has one-byte chars.)
I haven't tried this, but I have done a similar thing for text reading before without gaining much speed improvement (because you still incur the byte->char translation). I suspect a faster approach would be to do the reading in binary mode (i.e. u8vectors).
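Untested, but I mean something along these lines - read through a reusable u8vector buffer and step over the raw bytes:

(define (for-each-byte fname proc)
  ;; read the file through a reusable u8vector buffer so there is
  ;; no per-character byte->char translation
  (let ((port (open-input-file fname))
        (buf  (make-u8vector 65536)))
    (let loop ()
      (let ((n (read-subu8vector buf 0 (u8vector-length buf) port)))
        (if (> n 0)
            (begin
              (let scan ((i 0))
                (if (< i n)
                    (begin (proc (u8vector-ref buf i))
                           (scan (+ i 1)))))
              (loop)))))
    (close-input-port port)))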
Or you could give a URL for your data for others to play with.
Good idea - here's the data: http://www.phildawes.net/2007/gambit-csv/data.csv.gz (I can't remember where I first got it from, but it would have been from the web somewhere so hopefully I'm not breaking any copyright).
(Next I would suggest doing line reads and stepping through the characters locally without going through the trampoline required for each call to read-char. It seems that doing your own line buffering may be natural for this problem.)
I've done something similar in python before, using a fast regex library to first mask out escaped chars, then quoted strings. Then you can look at the last char in a line to see if it should be joined to the next line (i.e. the newline is within a quote). Note that libcsv doesn't actually copy the strings; it just calls back with pointers to each field within the input buffer (although my ffi code then does the copy and conses the fields into a list).
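In Scheme the join test could be a simple quote-parity check - a sketch (doubled "" quotes inside a field add two quote chars, so the parity still comes out right):

(define (line-ends-record? line)
  ;; an odd number of double-quote chars means the newline fell
  ;; inside a quoted field, so the record continues on the next line
  (let loop ((i 0) (quotes 0))
    (if (< i (string-length line))
        (loop (+ i 1)
              (if (char=? (string-ref line i) #\") (+ quotes 1) quotes))
        (even? quotes))))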
Cheers,
Phil
On Feb 9, 2007, at 1:36 PM, Phil Dawes wrote:
wrapped libcsv        ~240ms
Marc comma splitter   ~510ms
Phil Bewig csv parser ~1008ms
Here's what I get with libcsv and the buffered-output version of Phil Bewig's code plus expanding the fast path of read-char inline; the Scheme version seems about 10% slower.
Brad
libcsv:
(time (fn))  95 ms real time,  94 ms cpu time (77 user, 17 system), no collections, 19938808 bytes allocated, 6156 minor faults, no major faults
(time (fn)) 102 ms real time, 101 ms cpu time (99 user,  2 system), no collections, 19938808 bytes allocated,   28 minor faults, no major faults
(time (fn)) 102 ms real time, 100 ms cpu time (99 user,  1 system), no collections, 19938808 bytes allocated,    8 minor faults, no major faults
(time (fn)) 102 ms real time, 100 ms cpu time (99 user,  1 system), no collections, 19938808 bytes allocated,    8 minor faults, no major faults
(time (fn)) 101 ms real time, 101 ms cpu time (99 user,  2 system), no collections, 19938808 bytes allocated,    8 minor faults, no major faults
bewig2+macro-read-char.scm:
(time (fn)) 112 ms real time, 112 ms cpu time (99 user, 13 system), no collections, 45280848 bytes allocated, 3148 minor faults, no major faults
(time (fn)) 122 ms real time, 122 ms cpu time (90 user, 32 system), no collections, 45280848 bytes allocated, 7886 minor faults, no major faults
(time (fn)) 111 ms real time, 110 ms cpu time (91 user, 19 system), no collections, 45280848 bytes allocated, 4716 minor faults, no major faults
(time (fn)) 111 ms real time, 111 ms cpu time (88 user, 23 system), no collections, 45280848 bytes allocated, 4716 minor faults, no major faults
(time (fn)) 111 ms real time, 111 ms cpu time (95 user, 16 system), no collections, 45280848 bytes allocated, 4716 minor faults, no major faults
On Feb 9, 2007, at 1:36 PM, Phil Dawes wrote:
Ok - that prompted me to do a little speed testing on a large file (6000 odd records).
wrapped libcsv        ~240ms
Phil Bewig csv parser ~1008ms
These two don't agree on what the fields should be in all cases. For example, on the 97th record, I get
("Rockhead's Comics & Games" "Brian Miller" "BGM00218" "2006 Formula D\303\251 Gen Con Tournament qualifier" "The official 4 round Formula D\303\251 Tournament. Oversized tracks and cars - bring your dice if you have a set. Special trophies for the top 3 spots! One of last year's biggest board game tournaments. This is a qualifier round, to advance to a semi-finals race on Saturday." "BGM - Board Game" "4" "2006-08-10 12:00:00" "Everyone (6+)" "Some Experience Needed" "Formula D\303\251" "all Advanced Rules except time trials" "4.50" "" "40" "12")
for bewig and
("Rockhead's Comics & Games" "Brian Miller" "BGM00218" "2006 Formula D\351 Gen Con Tournament qualifier" "The official 4 round Formula D\351 Tournament. Oversized tracks and cars - bring your dice if you have a set. Special trophies for the top 3 spots! One of last year's biggest board game tournaments. This is a qualifier round, to advance to a semi-finals race on Saturday." "BGM - Board Game" "4" "2006-08-10 12:00:00" "Everyone (6+)" "Some Experience Needed" "Formula D\351" "all Advanced Rules except time trials" "4.50" "" "40" "12")
for libcsv. I don't know which is correct (but the \303 characters are in the data file). For what it's worth, \303\251 is the UTF-8 byte sequence for é and \351 is é as a single Latin-1/Unicode code point, so the two versions are decoding the file's characters differently.
Brad