Marc:
I'd like to have an implementation of SRFI 14 (character sets) that works with the entire Unicode character set rather than just the 256-character Latin-1 subset.
I think one needs a data structure that allows compact encoding of relatively large (about $2^{21}$ elements) sets of positive integers that consist mostly of long contiguous runs.
At any rate, I thought you may have solved this problem in gambit/lib/gambit/char/char#.scm, but for the life of me I can't figure out what you're doing there.
Can your data sets for encoding Unicode properties be used for an efficient SRFI 14 implementation?
Brad
This is the design I like: a u32 vector such that each even element is the starting code point of an interval and each odd element is the ending code point + 1 of an interval, where the intervals are in order. A binary search determines whether an element is in or out of the set. Intersection is merge, and complement is "prepend #x00000 to the set and remove the last element, unless the set already begins with #x00000, in which case remove it and append #x200000".
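A minimal sketch of the membership test for this representation (the name ranges-contain? and the example are mine, not taken from any particular implementation):

(define (ranges-contain? ranges code)
  ;; ranges is a sorted u32vector of interval boundaries: even indices
  ;; hold interval starts, odd indices hold end+1.  The binary search
  ;; counts the boundaries <= code; an odd count means code falls
  ;; inside one of the [start, end+1) intervals.
  (let loop ((lo 0) (hi (u32vector-length ranges)))
    (if (< lo hi)
        (let ((mid (quotient (+ lo hi) 2)))
          (if (<= (u32vector-ref ranges mid) code)
              (loop (+ mid 1) hi)
              (loop lo mid)))
        (odd? lo))))

(ranges-contain? '#u32(97 123) (char->integer #\m)) => #t  ;; the set {a-z}
(ranges-contain? '#u32(97 123) (char->integer #\A)) => #f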
See https://github.com/scheme-requests-for-implementation/srfi-14/blob/master/co... for how to make standard character set definitions that conform to Unicode rather than Java 1.0.
On 5/17/23 5:33 PM, John Cowan wrote:
Thanks, I'll look at it.
Unicode has stretches of code points that are invalid, doesn't it? In that case complement would always be complement against the full set of valid Unicode code points (which isn't a single contiguous range).
Brad
The Gambit RTS implements the Unicode tables with a few homogeneous vectors that encode the properties of the characters. The design is optimized for speed, i.e. the execution time of predicates like (char-lower-case? char), character conversions like (char-downcase char), and string conversions like (string-downcase string).
Most of the information is contained in the ##unicode-class u8vector which has 1 byte for the relevant Unicode characters (the first 205744 characters). This u8vector is indexed directly with the character code (not a binary search). The byte encodes the “character class” like this:
;; code = 0-9    => digit, with a code that is the digit value
;; code = 10     => no interesting class
;; code = 11     => whitespace
;; code = 12     => other class
;; code = 13-94  => upper case class
;; code = 95-97  => title case class
;; code = 98-199 => lower case class
These ranges are computed automatically by reading the Unicode properties files (see the macro-define-unicode-tables macro in lib/gambit/char/char#.scm which does this processing at macro expansion time).
Using a directly indexed u8vector makes it fast to check the basic properties, such as (char-lower-case? char), (char-numeric? char), and (char-whitespace? char).
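For example, (char-lower-case? char) can essentially be implemented like this (a sketch based on the class ranges above; the exact bounds check in the real code may differ):

(define (char-lower-case? char)
  (let ((c (char->integer char)))
    (and (< c 205744)  ;; codes beyond the table have no interesting class
         (let ((class (u8vector-ref ##unicode-class c)))
           (and (>= class 98) (<= class 199))))))  ;; lower case class range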
Other vectors encode the information required to upcase and downcase characters using "distance" information. For example, (char-downcase char) is essentially implemented like this:
(define (char-downcase char)
  (let* ((c (char->integer char))
         (d (+ c
               (quotient
                (s32vector-ref ##unicode-downcase-dist
                               (u8vector-ref ##unicode-class c))
                2))))
    (integer->char d)))
(char-downcase #\X) => #\x
(char-downcase #\a) => #\a
The low bit of the "distance" information is usually 0. It is 1 when the mapping is not one character to one character, for example (string-upcase "ß") => "SS".
So for implementing SRFI 14 efficiently, I think the above encoding should be used when possible for the predefined character sets, i.e. char-set:lower-case, char-set:upper-case, char-set:letter, char-set:digit, etc. It will allow quickly testing membership. The encoding mentioned by John (which requires a binary search) is a good implementation for general character sets, which typically have groups of contiguous characters in or out of the set. It would be wasteful in time and space to implement the predefined character sets with the general representation (for example, each of char-set:lower-case, char-set:upper-case and char-set:letter would occupy about 5 KB and require a binary search to test membership).
Marc
Marc: Thanks for your comments.
I have studied SRFI 14 a bit more in light of your comments.
It appears to me that it would suffice to use John's representation for all character sets, but write
char-set-contains?
in terms of
char-alphabetic? char-lower-case? char-numeric? char-upper-case? char-whitespace?
for some of the standard character sets.
Maybe use a structure for character sets with a slot for a membership-testing procedure, which would be a predefined, specialized, fast procedure for many if not all of the standard character sets:
char-set:lower-case    Lower-case letters
char-set:upper-case    Upper-case letters
char-set:title-case    Title-case letters
char-set:letter        Letters
char-set:digit         Digits
char-set:letter+digit  Letters and digits
char-set:graphic       Printing characters except spaces
char-set:printing      Printing characters including spaces
char-set:whitespace    Whitespace characters
char-set:iso-control   The ISO control characters
char-set:punctuation   Punctuation characters
char-set:symbol        Symbol characters
char-set:hex-digit     A hexadecimal digit: 0-9, A-F, a-f
char-set:blank         Blank characters -- horizontal whitespace
char-set:ascii         All characters in the ASCII set
char-set:empty         Empty set
char-set:full          All characters
but would use binary search for any user-defined character sets.
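A sketch of what I have in mind (all names hypothetical):

(define-structure char-set contains)  ;; contains: a char -> boolean procedure

(define (char-set-contains? cs char)
  ((char-set-contains cs) char))

;; standard sets reuse the fast table-based predicates:
(define char-set:lower-case (make-char-set char-lower-case?))
(define char-set:whitespace (make-char-set char-whitespace?))

;; user-defined sets fall back to binary search over the interval
;; boundaries (ranges-contain? as sketched for John's representation):
(define (ranges->char-set ranges)
  (make-char-set
   (lambda (char) (ranges-contain? ranges (char->integer char)))))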
John gave the following Unicode character set definitions for the SRFI 14 standard character sets:
char-set:lower-case   = property Lowercase
char-set:upper-case   = property Uppercase
char-set:title-case   = category Lt
char-set:letter       = property Alphabetic
char-set:digit        = category Nd
char-set:letter+digit = property Alphabetic + category Nd
char-set:graphic      = category L* + category N* + category M* + category S* + category P*
char-set:printing     = char-set:graphic + char-set:whitespace
char-set:whitespace   = property White_Space
char-set:iso-control  = 0000..001F + 007F..009F
char-set:punctuation  = category P*
char-set:symbol       = category S*
char-set:hex-digit    = 0030..0039 + 0041..0046 + 0061..0066
char-set:blank        = category Zs + 0009
char-set:ascii        = 0000..007F
Your process-data routine in char#.scm does not seem to keep enough of the property information to fill out the primitives of this table; the missing parts seem to be
char-set:punctuation = category P*
char-set:symbol      = category S*
char-set:graphic     = category L* + category N* + category M* + category S* + category P*  ;; M* and N*
Maybe a few more codes could be added for P*, S*, M* and N* above 199 in your encoding in char#.scm to have fast membership testing for all the "standard" classes.
Brad
PS: I still don't understand this table in char#.scm:
;; Encoding in unicode-class table:
;;
;; |         UPPER          |        OTHER           |        LOWER           |
;; |FLT:   :   :   | LF T=0 | LF U=0 :FLU:   :       | UT F=0 :FUT:   :       |
;; | F : L : T : LF T=0     | LF U=0 : F : L : U     | UT F=0 : F : U : T     |
;;
;; F | F :   :   : F        | F : F :   :            | 0 : F :   :            |
;; L |   : L :   : L        | L :   : L :            | 0 : 0 :   :            |
;; T |   :   : T : 0        | 0 : 0 :   :            | T :   :   : T          |
;; U | 0 :   :   : 0        | 0 :   :   : U          | U :   : U :            |
On Fri, May 19, 2023 at 6:27 PM Bradley Lucier lucier@math.purdue.edu wrote:
Unicode has stretches of code points that are invalid, doesn't it? In that case complement would always be complement against the full set of valid Unicode code points (which isn't a single contiguous range).
The ranges of valid Unicode characters are #x0000 to #xD7FF and #xE000 to #x10FFFF. As far as char-set-contains? is concerned, it doesn't matter whether the surrogate non-range #xD800 to #xDFFF is included or excluded, as there is no way to create a character whose char->integer value is in this range. However, when it comes to enumeration (the cursor functions, -fold, -for-each, -map, ->list, ->string) you do need to special-case this non-range to avoid traversing it. Alternatively, the special-casing can be done in the set operations, which is probably better because they are used less often than the enumerators.
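For instance, with the interval representation the gap can be baked into the sets themselves; the full set would be something like (sketch, hypothetical name):

(define char-set:full-ranges
  '#u32(#x0000 #xD800      ;; everything below the surrogates
        #xE000 #x110000))  ;; everything above the surrogates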
It appears to me that it would suffice to use John's representation for all character sets, but write
char-set-contains?
in terms of
char-alphabetic? char-lower-case? char-numeric? char-upper-case? char-whitespace?
for some of the standard character sets.
The difficulty for a portable implementation of SRFI 14 is that the above functions often cover only ASCII or only Latin-1 in the native implementation, or cover only a subset of the assigned characters corresponding to an older version of Unicode (after the long-obsolete version 1.1, the set of all assigned characters only grows with each version; it never shrinks). If you know that they are up to date, that's a reasonable approach, but if they are incomplete, it's better to do it the other way round: reimplement char-alphabetic? in terms of char-set-contains? and char-set:letter, etc.
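In that direction each predicate is a one-liner, e.g. (sketch):

(define (char-alphabetic? c) (char-set-contains? char-set:letter c))
(define (char-numeric? c)    (char-set-contains? char-set:digit c))
(define (char-whitespace? c) (char-set-contains? char-set:whitespace c))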
Note that IMO this statement in the definition of ucs-range->char-set
- If the requested range includes unassigned UCS values, these are silently ignored (the current UCS specification has "holes" in the space of assigned codes).
should be disregarded, because it makes these functions unnecessarily dependent on a specific version of Unicode. However, attempts to specify the surrogate codes #\xD800 to #\xDFFF should indeed be excluded, as they cannot ever be assigned.
Maybe a few more codes could be added for P*, S*, M* and N* above 199 in your encoding in char#.scm to have fast membership testing for all the "standard" classes.
That makes sense to me.
On May 20, 2023, at 3:34 AM, John Cowan cowan@ccil.org wrote:
Maybe a few more codes could be added for P*, S*, M* and N* above 199 in your encoding in char#.scm to have fast membership testing for all the "standard" classes.
That makes sense to me.
Is it guaranteed by the Unicode standard that those properties are mutually exclusive with the other properties (letter, digit, etc)?
If so, that would be easy to add.
Concerning the general representation, I'm considering using a structure containing a "negation" flag and a reference to a vector (and other things like the size of the set). The vector has increasing values which are the code points where there is a transition between "in the set" and "out of the set". Because of the separate negation flag, the representation can assume that code point 0 is always in the set represented by the vector (in other words the first element of the vector will never be 0). This gives constant time for set negation, which I expect is a frequent operation.

Moreover, I'm thinking that the vector can be either a u32vector, a u16vector, or a u8vector depending on the size of the last value in the vector. When representing character sets over the Latin-1 characters, probably a common situation, a factor of 4 improvement in space is possible (although in this situation a plain bitmap might be a better idea for compactness and speed of membership testing). Benchmarking is needed to sort this out!
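A sketch of the idea (the field and accessor names are placeholders, and the vector is shown as a u32vector for simplicity):

(define-structure char-set negated transitions)

;; Example: the set {a-z} does not contain code point 0, so it is
;; (make-char-set #t '#u32(97 123))

;; Complement is constant time: flip the flag, share the vector.
(define (char-set-complement cs)
  (make-char-set (not (char-set-negated cs))
                 (char-set-transitions cs)))

;; Membership: count the transitions <= the code point; an even count
;; means we are in the same state as code point 0 ("in the set").
(define (char-set-contains? cs char)
  (let ((t (char-set-transitions cs))
        (code (char->integer char)))
    (let loop ((lo 0) (hi (u32vector-length t)))
      (if (< lo hi)
          (let ((mid (quotient (+ lo hi) 2)))
            (if (<= (u32vector-ref t mid) code)
                (loop (+ mid 1) hi)
                (loop lo mid)))
          (if (char-set-negated cs) (odd? lo) (even? lo))))))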
Marc
On 5/21/23 9:05 AM, Marc Feeley wrote:
Is it guaranteed by the Unicode standard that those properties are mutually exclusive with the other properties (letter, digit, etc)?
I'm just learning about Unicode, but it appears that yes, the General Categories of characters are disjoint:
http://www.unicode.org/reports/tr44/#General_Category_Values
For future-proofing, it would be good to encode each category on its own, in such a way that related categories are grouped together numerically. Perhaps the lowercase and uppercase letters, which need case-changing tables, should still be last in the encoding.
Then we can encode the SRFI 14 character sets with a perhaps imperfect encoding (though with fast membership tests from your Unicode table) and worry about optimizations later.
Brad
On Sun, May 21, 2023 at 5:39 PM Bradley Lucier lucier@math.purdue.edu wrote:
For future-proofing, it would be good to encode each category on its own, in such a way that related categories are grouped together numerically. Perhaps the lowercase and uppercase letters, which need case-changing tables, should still be last in the encoding.
There's really nothing to future-proof: the number of general categories is fixed, although it's possible that some characters may change categories.