I generate a list of numbers, write them to a file, and measure the time it takes to read them back. Attached is my file bench.scm.
For most sizes, the plain-text format appears to be faster; only at 10 million entries does the binary format become slightly faster. What am I doing wrong?
running with: gsi bench.scm
(number-of-entries: 1 (binary: 1.5497207641601562e-4) (plain: 9.679794311523438e-5))
(number-of-entries: 10 (binary: 8.487701416015625e-5) (plain: 6.699562072753906e-5))
(number-of-entries: 100 (binary: 1.399517059326172e-4) (plain: 1.6808509826660156e-4))
(number-of-entries: 1000 (binary: .0024831295013427734) (plain: .0011830329895019531))
(number-of-entries: 10000 (binary: .004712104797363281) (plain: .005532979965209961))
(number-of-entries: 100000 (binary: .06526494026184082) (plain: .04858684539794922))
(number-of-entries: 1000000 (binary: .8762490749359131) (plain: .55377197265625))
(number-of-entries: 10000000 (binary: 7.88455605506897) (plain: 9.257179021835327))
You can use object->u8vector and u8vector->object to serialize and deserialize, together with write-subu8vector and read-subu8vector on the file port.
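For example, a minimal round trip through a file port could look like the sketch below (the file name data.bin and the sample data are placeholders of my own, not from your benchmark):

(define data (list 3.14 'a-symbol "some text" (vector 1 2 3)))

;; Serialize the object and write the bytes to a file.
(call-with-output-file "data.bin"
  (lambda (port)
    (let ((v (object->u8vector data)))
      (write-subu8vector v 0 (u8vector-length v) port))))

;; Read the bytes back and deserialize.
(call-with-input-file "data.bin"
  (lambda (port)
    (let* ((buf (make-u8vector 65536)) ;; big enough for this small demo
           (n   (read-subu8vector buf 0 65536 port)))
      (pp (u8vector->object (subu8vector buf 0 n))))))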
What I currently do on a TCP/IP port is first write a u8vector of size 4 holding the size of the object that follows, and then write the u8vector representing the object itself. I guess you could do the same with files: read the 4 bytes giving the size, read-subu8vector of that size, and repeat until end of file.
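A rough sketch of that framing, assuming a 4-byte little-endian length prefix (the helper names write-framed-object, read-framed-object and read-all-framed are made up for this example):

;; Write one object preceded by a 4-byte length prefix.
(define (write-framed-object obj port)
  (let* ((payload (object->u8vector obj))
         (len     (u8vector-length payload))
         (header  (u8vector (bitwise-and len #xff)
                            (bitwise-and (arithmetic-shift len -8) #xff)
                            (bitwise-and (arithmetic-shift len -16) #xff)
                            (bitwise-and (arithmetic-shift len -24) #xff))))
    (write-subu8vector header 0 4 port)
    (write-subu8vector payload 0 len port)))

;; Read one object, or return the end-of-file object when the port is exhausted.
(define (read-framed-object port)
  (let ((header (make-u8vector 4)))
    (if (< (read-subu8vector header 0 4 port) 4)
        #!eof
        (let* ((len     (+ (u8vector-ref header 0)
                           (arithmetic-shift (u8vector-ref header 1) 8)
                           (arithmetic-shift (u8vector-ref header 2) 16)
                           (arithmetic-shift (u8vector-ref header 3) 24)))
               (payload (make-u8vector len)))
          (read-subu8vector payload 0 len port)
          (u8vector->object payload)))))

;; Reading everything back is then a loop until end of file.
(define (read-all-framed port)
  (let loop ((acc '()))
    (let ((obj (read-framed-object port)))
      (if (eof-object? obj)
          (reverse acc)
          (loop (cons obj acc))))))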