[Snow-users-list] high-priority snow packages and package naming

Thomas Lord lord at emf.net
Sat Sep 15 20:17:46 EDT 2007


Julian Graham wrote:
>> Fine, but specify the input language -- the regexp language.
>> If you take my advice, it will have just *, |, [], and ()
>>     
>
> Sure, but most of the Scheme interpreters we're talking about already
> accept a broader regexp syntax (usually it's POSIX).  And I thought we
> were going for performance -- meaning that a pass-thru to the
> interpreter's regexp API (which, in turn, is often a pass-thru to a
> native implementation -- glibc, etc.) is the way to go.  


I'm trying to tell you something about the architecture of regexp
matchers and why it (arguably -- again, the BS caution) matters.

To get from "true regular expressions" to POSIX or Perl regexps
you write a back-tracking search engine that, in its "leaf node" evaluations
calls out to a true-regular expression engine.
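
Roughly, the shape I have in mind -- and this is just a sketch, in
portable Scheme, assuming a primitive TRUE-MATCH that anchors a true
regular expression at a position and returns the end of the match or
#f (the name and the 'seq/'alt tree representation are placeholders,
not proposed API):

    (define (match-node node str pos k)
      ;; NODE is a parsed regexp tree; K is the success continuation,
      ;; called with the position just past the match.  When K returns
      ;; #f we fall back into earlier choice points -- that is the
      ;; backtracking.
      (cond
        ;; Leaf: hand a true regular expression to the native engine.
        ((string? node)
         (let ((end (true-match node str pos)))
           (and end (k end))))
        ;; (seq a b ...): match the pieces in order.
        ((eq? (car node) 'seq)
         (let loop ((parts (cdr node)) (pos pos))
           (if (null? parts)
               (k pos)
               (match-node (car parts) str pos
                           (lambda (end) (loop (cdr parts) end))))))
        ;; (alt a b ...): try each branch until one lets the rest succeed.
        ((eq? (car node) 'alt)
         (let loop ((branches (cdr node)))
           (and (pair? branches)
                (or (match-node (car branches) str pos k)
                    (loop (cdr branches))))))
        (else (error "unknown regexp node" node))))

The interesting features -- submatch positions, backreferences,
Perl-ish semantics -- all hang off that tree walk; the native engine
only ever sees plain *, |, [], and ().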

My bet is that if that backtracking search part is coded in
portable Scheme atop nothing more than native "true regular
expressions" then:

1. Yes, there's a *slight* performance hit, but not unbearable.
2. There's a huge gain in utility.

Going the other way and encouraging people to depend on either
POSIX or Perl expressions is going to strand a lot of code under
a pretty hefty dependency on ad hoc, legacy APIs and implementations
of those APIs.   True regular expressions hit a more "timeless" note.

-t







> Still, given
> that different interpreters accept different regexp "extensions," I
> agree that some normalization is required.  How to do it, though,
> without actually implementing much regexp logic in this package?
>
>   


Release early, release often.   Just have the core package do nothing
more than true regexps and hope that the intermediate things get
filled in later.   I think you'll be surprised how nicely it works out
to write application code directly with true regular expressions.
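
For instance -- purely a sketch, assuming a FIND-MATCH that takes a
true regexp, a target string, and a start index, and returns two
values (start and end of the leftmost match, or two #f's when there
is none):

    (define (collect-matches pattern str)
      ;; Gather every substring of STR that matches PATTERN, left to
      ;; right.  Assumes PATTERN can't match the empty string.
      (let loop ((pos 0) (acc '()))
        (call-with-values
            (lambda () (find-match pattern str pos))
          (lambda (start end)
            (if start
                (loop end (cons (substring str start end) acc))
                (reverse acc))))))

    ;; (collect-matches "[0-9][0-9]*" "snow 0.1, chicken 2.6")
    ;;   => ("0" "1" "2" "6")

That kind of scanning loop covers a surprising amount of what people
actually reach for regexps to do.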


>> It's ok if the Scheme binding has to translate from
>> a portable true regular expression syntax into whatever
>> the system uses natively (e.g., posix, perl, whatever).
>>     
>
> Right -- I'd think it would even be desirable.
>
>   


Yes.  


>> There should be no such thing as a "match object".
>> If you want things like sub-exp positions, I'm saying
>> don't use the posix re features for that or perl's ---
>> write that stuff in Scheme, using the true regular
>> expression matcher as the "inner loop".
>>     
>
> Fair enough -- I'd just like to avoid situations in which there's no
> way to prevent the Scheme interpreter from doing a lot of work that
> we're just going to discard.  E.g., I can't think of a way (besides
> memoization) to implement your (find-start ...) function on top of,
> say, Guile's regexp implementation (which is a pass-thru to glibc's
> native implementation) that doesn't involve the overhead of doing a
> complete match just to obtain the position of the first submatch.  


Hehe.   Some tricks:

Caution: I'm not *intimately* familiar with glibc's current internals,
so details will vary, but these are the right tricks to think about
anyway.

Notice that the POSIX regexec function takes an output argument for
match positions and subexpression match positions -- an array of
"match data".   Notice also that you pass an "nmatch" parameter that
says how many of those positions you want to know about (i.e., how
big your array is).

The trick is that good matchers (I think glibc is one) are likely to be
lazy in computing those match positions -- they won't compute any but
the ones you ask for.   You should get the best practical performance
by asking just for the extent of the overall match.
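
Concretely -- a sketch only; the name %REGEXEC and its shape are made
up, standing in for whatever low-level hook a given Scheme exposes
over regexec() -- the binding-level FIND-MATCH just asks for one piece
of match data:

    ;; Pretend (%regexec pattern str start nmatch) forwards NMATCH to
    ;; regexec() and returns a vector of NMATCH (start . end) pairs,
    ;; or #f when there is no match.  Hypothetical, not any existing
    ;; binding's API.
    (define (find-match pattern str pos)
      ;; Ask only for the extent of the overall match (nmatch = 1),
      ;; so a lazy engine never computes submatch positions we would
      ;; just throw away.
      (let ((hits (%regexec pattern str pos 1)))
        (if hits
            (let ((whole (vector-ref hits 0)))
              (values (car whole) (cdr whole)))
            (values #f #f))))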

Thus, you get a pretty good FIND-MATCH if you only ask for the
overall extent of the match and, while it isn't quite optimal, you get
a reasonable, portable definition of FIND-START by running FIND-MATCH
and discarding the second return value.
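
In code, that fallback is one line of glue (same FIND-MATCH
convention as above -- two values, or two #f's on no match):

    (define (find-start pattern str pos)
      ;; Portable if not optimal: ask for the whole extent, keep only
      ;; the start; a cleverer native hook can shortcut this later.
      (call-with-values
          (lambda () (find-match pattern str pos))
        (lambda (start end) start)))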

Ambitious implementors are unlikely to have profound difficulty optimizing
FIND-START, if they choose to, even if they are mostly just using glibc
or Rx or whatever.







> So,
> yeah, I agree that match structures are kind of bullshit, but the
> majority (maybe all) of the Scheme interpreters we're dealing with
> here produce them -- I think it's slightly less bullshit when they
> present a match as an S-expr of, say, (([start] . [end]) ([start] .
> [end]) ...).  Given that the shitty, opaque match structures can be
> translated into these somewhat more useful S-exprs, well... you know,
> is that a palatable alternative?
>   

Don't try to maximize inclusion of every little "feature" found in the
various implementations.  You just need a good basis set, and then
implementors can catch up by optimizing that basis set later.




>   
>> That's all you need to duplicate (and surpass) the functionality
>> of full Posix regexps and Perl regexps using portable Scheme
>> code.   And, those are all easy to do on top of either a Perl
>> or Posix engine.
>>     
>
> Easy, sure, but how efficient is it?
>
>   


THAT is exactly the high-risk question here.    I have a very, very
strong hunch that it's fast enough.     This is based on my experience
implementing this stuff in C in contexts where I was doing things like
counting instructions and adding up cycles during optimization.
I *think* it'll be fast enough.   That's all I've got for ya.   (We could
get deeper into the tech internals of regexps but... I got a day job to
keep up with :-)

-t


