Hello everyone,
I am working on the ICFP programming contest. There is something I am really confused about.
Suppose that I have an input port, and I want to read the next 8 bytes as an IEEE 754 double value, or the next 4 bytes as a 32-bit unsigned integer. Is there any way to do this in Gambit? I mean, they are not strings of the digits '0'-'9', but the actual binary representation of the number.
Secondly, is there any way to pass a u8vector to a c-lambda, as a char* or whatever? The C Interface section of the manual only provides information about character strings, not u8vectors. Is it possible to pass a u8vector as a char*?
Thank you very much, Lam Luu
On 26-Jun-09, at 10:37 PM, Lam Luu wrote:
The code below will write and read 64-bit floats. The IEEE 754 representation is not guaranteed, but on most machines that is what you'll get. Beware that the byte order will be determined by the processor's native endianness.
To pass a u8vector to a c-lambda do this:
(define foo
  (c-lambda (scheme-object) int
    "char *ptr = ___CAST(char*,___BODY(___arg1));
     ___result = ptr[0] + ptr[1] + ptr[2];"))
(pp (foo (u8vector 10 20 30))) ;; prints 60
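A variant that also passes the vector's length, so the C side doesn't hard-code it (a sketch along the same lines; sum-bytes and the extra int length argument are illustrative, not from the message above):

(define sum-bytes
  (c-lambda (scheme-object int) int
    "char *ptr = ___CAST(char*,___BODY(___arg1));
     int sum = 0;
     int i;
     for (i = 0; i < ___arg2; i++)
       sum += ptr[i];
     ___result = sum;"))

(define v (u8vector 10 20 30 40))
(pp (sum-bytes v (u8vector-length v))) ;; prints 100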
Good luck with the contest!
Marc
(define u8vector-subtype (##subtype (u8vector)))
(define f64vector-subtype (##subtype (f64vector)))
(define (write-f64 x port)
  (let ((v (f64vector x)))
    (##subtype-set! v u8vector-subtype)
    (write-subu8vector v 0 (u8vector-length v) port)))
(define (read-f64 port)
  (let ((v (f64vector 0.0)))
    (##subtype-set! v u8vector-subtype)
    (let ((n (read-subu8vector v 0 (u8vector-length v) port)))
      (if (= n (u8vector-length v))
          (begin
            (##subtype-set! v f64vector-subtype)
            (f64vector-ref v 0))
          #!eof))))
(call-with-output-file "f64test"
  (lambda (port)
    (write-f64 -1.5 port)
    (write-f64 +inf.0 port)
    (write-f64 3.1415926 port)))

(call-with-input-file "f64test"
  (lambda (port)
    (let* ((a (read-f64 port))
           (b (read-f64 port))
           (c (read-f64 port))
           (d (read-f64 port)))
      (pp (list a b c d)))))
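The same port can also supply the 32-bit unsigned integers from the original question; a minimal sketch (read-u32-be is a hypothetical helper, assuming the bytes arrive big-endian; reverse the accumulation order for little-endian):

(define (read-u32-be port)
  ;; Read 4 bytes with read-u8 and assemble them
  ;; most-significant-byte first into an exact integer.
  (let loop ((i 0) (n 0))
    (if (= i 4)
        n
        (let ((b (read-u8 port)))
          (if (eof-object? b)
              #!eof
              (loop (+ i 1) (+ (* n 256) b)))))))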
Ah, you beat me to it. Here's my code, with a few tests.
Brad
;; Obviously, no error checking, etc.
;; You can use read-u8 to read the bytes from a port
;; and stick them into a bytevector (or do something similar
;; with the bytes directly)

;; These procedures' names and calling sequences are taken from the
;; bytevector library of R6RS. Unfortunately, IEEE 754 does not say
;; what the specific bit sequences representing the numbers are
;; supposed to be, but usually there are only two ways that machines
;; do it.
(define (bytevector-ieee-double-native-ref bytevector k)
  ;; extracts a double (with native endianness) from bytevector,
  ;; which I take in Gambit to be a u8vector, from the positions
  ;; k, k+1, ..., k+7
  (let ((aliased-vector (f64vector 0.)))
    (do ((i 0 (+ i 1)))
        ((= i 8) (f64vector-ref aliased-vector 0))
      (##u8vector-set! aliased-vector i
                       (u8vector-ref bytevector (+ k i))))))
(define (bytevector-ieee-double-native-set! bytevector k x)
  ;; inserts a double (with native endianness) into bytevector,
  ;; which I take in Gambit to be a u8vector, into the positions
  ;; k, k+1, ..., k+7
  (let ((aliased-vector (f64vector x)))
    (do ((i 0 (+ i 1)))
        ((= i 8))
      (u8vector-set! bytevector (+ k i)
                     (##u8vector-ref aliased-vector i)))))
#|
On my PowerPC Mac portable, the result is

(load "binary.scm")
63
191
-1.
240
248
-1.5
"/Users/lucier/Desktop/binary.scm"

On my Intel box, the sign bit is on the other end (everything is
reversed) so you get

(load "binary.scm")
0
128
1.0000000000000284
0
8
1.0000000000004832
|#
(define bytevector (make-u8vector 8 0))

;; store 1.0 and look at byte 0 (63, i.e. #x3f, on a big-endian
;; machine; 0 on little-endian)
(bytevector-ieee-double-native-set! bytevector 0 1.)
(display (u8vector-ref bytevector 0)) (newline)

;; set bit 7 of byte 0 (the sign bit on a big-endian machine)
(u8vector-set! bytevector 0 (bitwise-ior 128 (u8vector-ref bytevector 0)))
(display (u8vector-ref bytevector 0)) (newline)
(display (bytevector-ieee-double-native-ref bytevector 0)) (newline)

;; now add 8 to byte 1, which holds exponent and mantissa bits
;; on a big-endian machine
(display (u8vector-ref bytevector 1)) (newline)
(u8vector-set! bytevector 1 (+ 8 (u8vector-ref bytevector 1)))
(display (u8vector-ref bytevector 1)) (newline)
(display (bytevector-ieee-double-native-ref bytevector 0)) (newline)
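As the comment at the top says, read-u8 supplies the bytes from a port; a minimal sketch of that step (read-f64-native is a hypothetical name, and like the code above it does no error checking):

(define (read-f64-native port)
  ;; Read 8 bytes into a scratch u8vector, then decode them
  ;; with bytevector-ieee-double-native-ref.
  (let ((buf (make-u8vector 8 0)))
    (do ((i 0 (+ i 1)))
        ((= i 8) (bytevector-ieee-double-native-ref buf 0))
      (u8vector-set! buf i (read-u8 port)))))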
On 27-Jun-09, at 12:17 AM, Marc Feeley wrote:
I forgot to mention these conversion procedures, which specifically implement the IEEE 754 encoding of 32- and 64-bit floating point numbers. They convert an inexact real to and from the exact integer that is its 32- or 64-bit representation.
(##flonum.->ieee754-32 x) ;; convert flonum x to 32 bit representation
(##flonum.->ieee754-64 x) ;; convert flonum x to 64 bit representation
(##flonum.<-ieee754-32 n) ;; convert 32 bit representation n to flonum
(##flonum.<-ieee754-64 n) ;; convert 64 bit representation n to flonum
Here are a few examples:
> (##flonum.->ieee754-64 3.14159)
4614256650576692846
> (##flonum.<-ieee754-64 4614256650576692846)
3.14159
> (number->string (##flonum.->ieee754-32 1.0) 16)
"3f800000"
> (number->string (##flonum.->ieee754-64 1.0) 16)
"3ff0000000000000"
> (number->string (##flonum.->ieee754-64 -1.0) 16)
"bff0000000000000"
> (number->string (##flonum.->ieee754-64 2.0) 16)
"4000000000000000"
> (number->string (##flonum.->ieee754-64 (+ 1.0 (expt 0.5 52))) 16)
"3ff0000000000001"
> (number->string (##flonum.->ieee754-64 +inf.0) 16)
"7ff0000000000000"
Marc