On 10-Sep-09, at 11:32 AM, Adrien Piérard wrote:
2009/9/11 Bradley Lucier lucier@math.purdue.edu:
Or home-grown extensible u8vectors? Or strings (after configuring with --enable-char-size=1)? How many distinct "characters" are being distinguished?
Warning: *Beware of the following question, for I should already be sleeping.* Suppose I have an alphabet of 2 letters (to keep things simple); I can encode each letter with two bits. Despite the possible algorithmic and computational pain, since we have bignums, how about encoding a string over this alphabet as a number, using bitwise operations? Appending a character to a string is a SHIFT of two bits, then an OR. Referencing should be a matter of a LOG (to get the size), then a shift and an AND with 3.
I'm still awake enough to encode just 2 letters, and not three, on 2 bits (since concatenating "01" onto "00" would not work as expected). I guess it depends on how bignums are represented in memory.
What would such a misuse of bignums be worth?
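[For concreteness, here is one way to read that proposal as Scheme, using Gambit's bitwise primitives (arithmetic-shift, bitwise-ior, bitwise-and, integer-length). This is only a sketch: the procedure names are mine, and the two letters are coded as #b01 and #b10 (never #b00) so the leading letter's bits are not lost.]

;; 2 bits per letter: append with a shift and an OR, read back with
;; a shift and an AND 3; the length comes from integer-length.
;; The empty string is 0.

(define (string2-append s code)       ; code is 1 or 2
  (bitwise-ior (arithmetic-shift s 2) code))

(define (string2-length s)            ; number of letters
  (quotient (+ (integer-length s) 1) 2))

(define (string2-ref s i)             ; i counts from the first letter
  ;; the i-th letter occupies the 2 bits starting at 2*(length-1-i)
  (bitwise-and (arithmetic-shift s (* -2 (- (string2-length s) i 1))) 3))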
You can use 1 bit per letter if there are 2 letters. You simply need a 1 bit at the top end to indicate the length of the bit string; then integer-length (minus one) gives you that length:
(map integer-length '(1 2 3 4 5 6 7 8 9 10 11 12))
(1 2 2 3 3 3 3 4 4 4 4 4)
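[For concreteness, a minimal sketch of this representation; the procedure names are mine, and it assumes Gambit's arithmetic-shift, bitwise-ior, bitwise-and and integer-length.]

;; The empty string is 1 (the sentinel bit alone); each letter is a
;; single bit appended at the low end.

(define empty-bitstring 1)

(define (bitstring-append s letter)   ; letter is 0 or 1
  (bitwise-ior (arithmetic-shift s 1) letter))

(define (bitstring-length s)          ; number of letters
  (- (integer-length s) 1))

(define (bitstring-ref s i)           ; i counts from the first letter
  ;; the i-th letter sits at bit (length - 1 - i)
  (bitwise-and (arithmetic-shift s (- (+ i 1) (bitstring-length s))) 1))

;; (bitstring-append (bitstring-append (bitstring-append 1 0) 1) 1) => 11, i.e. #b1011
;; (bitstring-length #b1011) => 3
;; (bitstring-ref #b1011 0)  => 0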
Note that your representation has the same asymptotic space efficiency as a u8vector where each byte contains 8 letters. I'm pretty sure an explicit u8vector representation would be faster (don't be fooled by "appending a letter is just a shift"... the shift is going to be O(n) not O(1) because these are bignums).
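[A rough sketch of such an explicit u8vector representation, 8 one-bit letters per byte; the names and the fixed capacity are my own simplifications. Appending then amounts to a single byte update at the next free index, plus an occasional copy when the vector has to grow, instead of shifting the whole number.]

(define (make-bits n)                 ; room for n one-bit letters
  (make-u8vector (quotient (+ n 7) 8) 0))

(define (bits-ref v i)                ; letter at index i, as 0 or 1
  (bitwise-and (arithmetic-shift (u8vector-ref v (quotient i 8))
                                 (- (remainder i 8)))
               1))

(define (bits-set! v i letter)        ; store letter (0 or 1) at index i
  (let ((j (quotient i 8))
        (k (remainder i 8)))
    (u8vector-set! v j
      (bitwise-ior (bitwise-and (u8vector-ref v j)
                                (bitwise-not (arithmetic-shift 1 k)))
                   (arithmetic-shift letter k)))))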
Marc
P.S. Get some sleep!