Hi!
This might not be the right place for this, but I've failed to solicit useful (or any) opinions elsewhere and I imagine some people here would be interested in it.
I drafted a post called "A Critique on the Language Scheme". The draft is here:
https://gist.github.com/4430394
I hope this provokes thought.
Thanks, -Jason
On Fri, Jan 04, 2013 at 09:34:31PM -0500, Jason Felice wrote:
Hi!
This might not be the right place for this, but I've failed to solicit useful (or any) opinions elsewhere and I imagine some people here would be interested in it.
I drafted a post called "A Critique on the Language Scheme". The draft is here:
car and cdr are *not* leftovers from the PDP-11. They're a lot older than that. They stand for 'contents of the address part of the register' and 'contents of the decrement part of the register'. 'address' and 'decrement' were two fields of the 36-bit word of the IBM 704 and its 709/7090 successors, dating back to the late 50's and early 60's -- the machines on which Lisp and Lisp 1.5 were implemented.
-- hendrik
Hi Jason,
What you wrote reflects that you thought this through with care.
You wanted an opinion so here you go:
* Identifiers made case-sensitive.
*Rationale:* This makes integration with other languages easier. Since the usual way to write Scheme is all lowercase, this should not be an issue. Many implementations already deviate from the standard in this regard.
At least, all the Scheme implementations I checked already have case-sensitive identifiers.
- Replacement of the macro system
This entails the removal of define-syntax, syntax-rules, let-syntax, letrec-syntax. They are replaced with define-macro and let-macro. *Rationale:* R5RS's syntax-rules defines a sub-language used for hygienic (http://en.wikipedia.org/wiki/Hygienic_macro) and [referentially transparent] macro transformation. The replacement removes the sub-language and replaces it with macro expanders which are written in plain Scheme code. While this puts the burden of hygiene and referential transparency on the programmer, it reduces the amount of knowledge the programmer needs and gives her more power and flexibility. The system will provide expand-macro in order to allow inspection of the macro-expansion results.
While at its core Scheme is the making of a programming language out of lambda calculus, the macro facilities are more of a 'feature', there to provide a needed practical function, rather than something so scientifically or otherwise perfect that it had to be in the language for that reason.
Define-macro and let-macro come with this enormous hygiene issue:
; [Your macro library:]
(define-macro (give-me-the-first-element-please var)
  `(car ,var))

; [User's code:]
(define car +) ; User's custom language!
(give-me-the-first-element-please '(1 2 3))
=> error.
Thus you can see the limited purpose of define-macro and let-macro. Of course it can work perfectly in a code base that is a closed system, such as Gambit's sources, and in any code base where programming is done with this possibility in mind.
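(As an aside, a minimal sketch of the ad hoc, partial workarounds commonly used with plain define-macro, assuming Gambit's gensym and ## namespace; this is not the alias approach described next, just the usual defensive style:)

; Refer to primitives through the ## namespace, which user code does not
; normally shadow, and use gensym for any identifier the macro introduces.
(define-macro (give-me-the-first-element-please var)
  `(##car ,var))

(define-macro (swap! a b)
  (let ((tmp (gensym)))          ; fresh name, cannot capture user bindings
    `(let ((,tmp ,a))
       (set! ,a ,b)
       (set! ,b ,tmp))))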
A define-macro/let-macro system with hygiene through |alias| is fundamentally described at http://www.p-cos.net/documents/hygiene.pdf and, as regards Gambit in detail, in this post by Per from the 20th of March 2012, see here https://mercure.iro.umontreal.ca/pipermail/gambit-list/2012-March/005815.htm....
The syntactic-closures based macro systems (syntax-rules/syntax-case, including define-syntax, syntax-rules, let-syntax, letrec-syntax) bring incredible complexity, and with that low debuggability, through a very complex identifier concept based on a kind of duality between the identifier symbol itself and the syntactic environment in which it is used in a particular instance. They also bring a complex API for handling macros this way, which by nature is not Schemy and not suited for debugging.
Per arrived at the suggestions above after having spent approximately six months full-time developing the Black Hole module system, which does hybrid define-macro and syntactic-closures expansion.
There might be some caveat I didn't get, but I'd guess you could actually make a macro expander that supports both an alias macro system as per the above and syntactic closures for compatibility with code that uses them, possibly by splitting expansion into two phases, thus isolating all the identifier-related complexity.
Removal of do
*Rationale:* Use of do rather than let for looping is not typical in Scheme, and the form is hard to read.
I never used it; the ~50 lines or so it probably occupies in the evaluator and compiler never bothered me.
car and cdr replaced with pattern matching and iterators. The names car and cdr are vestiges of the PDP-11. They and their variations (such as caddr) are difficult to read and reason through. Pattern matching can be used to extract values from data structures easily and concisely, and so the language will instead offer pattern matching along the lines of this: http://wiki.call-cc.org/man/3/Pattern%20matching [..]
&
The standard procedures made generic [..]
The point of Scheme is to provide a language in which you can do anything [at least somewhat in the direction of high-level]. The language is not intended to be complete. It's *your* role to make it complete for your purposes.
Partially, it is put this way because there's always some variation or improvement you could make to a pattern-matching, iterator or generics system, while with the limited scope the Scheme language works within, it's easy enough to get to a minimal language around which consensus can grow quite spontaneously, widely and lastingly.
Very much secondarily, there might be some performance characteristic in how generics would combine with the type system that makes them unsuitable as a feature included by default.
I admit pattern matching, iterators and generics would be really nice to have. It is not at all difficult to implement them in a given environment such as Black Hole.
I just haven't needed them badly enough for it to be worth implementing, though.
You can be the one to implement it!
I'm sure many would be happy to use it, if it's experienced as really neat.
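For concreteness, a tiny sketch of what the destructuring proposed above could look like, assuming a hypothetical match form in the style of the Chicken matcher Jason links to (the form is not part of any standard; it is only an illustration):

(define (second-element lst)
  (match lst
    ((_ x . _) x)                       ; destructure instead of (car (cdr lst))
    (_ (error "list too short:" lst))))

(second-element '(1 2 3))  ; => 2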
values, let-values, let*-values and call-with-values removed.
values is the tool for bringing symmetry between procedure invocation and return, and thus it is a bringer of important symmetry. Since the language is fundamentally about symmetry, I cannot see any reason for removal. Of course an implementation can implement values any way it wants to, including just as an alias for lists.
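A small standard-R5RS illustration of that symmetry, where a procedure returns two values and the caller receives them positionally, just as arguments are passed:

(define (div-mod a b)
  (values (quotient a b) (remainder a b)))

(call-with-values
  (lambda () (div-mod 17 5))
  (lambda (q r) (list q r)))   ; => (3 2)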
Any language or system embodies a design balance, or equilibrium, that involves an aspect of compromise. That brings a limit in suitability for some range of purposes.
Scheme provides extensive code abstraction features, but, for instance, it is not as heavily optimized for contemporary CPU caching as C and C++ are, through their much more extensive use of structures [of structures, etc.] laid out in completely sequential RAM address intervals. Partially for that reason, you might be better off implementing video codecs in C.
So the task is always yours to choose appropriate tools for a given task, in consideration of their unique characteristics.
Please feel free to share any thoughts or reflections.
Regards, Mikael
Great post Mikael! So happy to read your thoughts.
On Sat, Jan 5, 2013 at 6:37 PM, Mikael mikael.rcv@gmail.com wrote:
The syntactic-closures based macro systems (syntax-rules/syntax-case, including define-syntax, syntax-rules, let-syntax, letrec-syntax) bring incredible complexity, and with that low debuggability, through a very complex identifier concept based on a kind of duality between the identifier symbol itself and the syntactic environment in which it is used in a particular instance. They also bring a complex API for handling macros this way, which by nature is not Schemy and not suited for debugging.
Per arrived at the suggestions above after having spent approximately six months full-time developing the Black Hole module system, which does hybrid define-macro and syntactic-closures expansion.
There might be some caveat I didn't get, but I'd guess you could actually make a macro expander that supports both an alias macro system as per the above and syntactic closures for compatibility with code that uses them, possibly by splitting expansion into two phases, thus isolating all the identifier-related complexity.
Speaking of syntactic closures: if we ignore syntax-case, I disagree that it brings "incredible" complexity. I had been uncertain about syntactic closures for years, but this holiday I finally got spare time to read its implementation in riaxpander/chibi-scheme, and I found the concept quite straightforward. What I sorely missed during the learning process, though, was an easy-to-understand, concrete implementation with documentation.
I've cloned such a system in Gambit, and I keep digging into how to integrate hygiene with the module system and Gambit's compilation process.
Thanks Meng
I agree with Meng. I see syntax-rules as a DSL for hygienic macros. It is completely "schemey" in the same way libraries like Kanren for logic programming, or FrTime for reactive programming, are. The only difference is that when using hygienic macros, your code becomes data as well, to be processed before it actually turns into code. Actually, when you take into account a couple of pitfalls regarding lexical scoping and shadowing, and use a variety of techniques, including continuation-passing style, syntax-rules macros become extremely powerful and similar to regular recursive Scheme. Unhygienic macros are well-known timebombs waiting to explode as soon as client code does something the library didn't think of. They are useful for self-sufficient systems, as Mikael said.
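To make the hygiene point concrete, here is Mikael's earlier example rewritten with syntax-rules (how a top-level (define car +) behaves depends on the module system, but local shadowing at the use site shows the protection):

(define-syntax give-me-the-first-element-please
  (syntax-rules ()
    ((_ var) (car var))))

(let ((car +))                                  ; user's local "custom language"
  (give-me-the-first-element-please '(1 2 3)))  ; => 1, not an error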
Meng, I wasn't aware of this "riaxpander"; it seems Chicken also has it. Is your riaxpander implementation open source?
I would add these comments to the original post:
- CAR and CDR are shorter than FIRST and REST (why not HEAD-TAIL?), so besides the historical meaning, I prefer them for this reason. However, you can always define your own first and rest (the former is actually defined in SRFI-1; see the small example after this list). About making them generic, next point:
- I think the procedure specialization for types (char=, *-length, etc.) is good as it favors performance. If you want the generic ones, they are straightforward to define. For instance, that's what the author of SRFI-47 does: replace array=? with an array-augmented version of R5RS equal?. While you can do your own generalization, you couldn't do it the other way around: given a generic procedure in R5RS, specialize it for your types.
- Mikael's point about symmetry is absolutely beautiful. And indeed I defend the usefulness of values, which are of special interest in functional programming where you avoid side-effects. Also, the points about C/C++ (and assembly, I may add) are completely true, but that's one of the reasons I find Gambit a particularly powerful system.
- Generics are defined in several libraries; there are many implementations.
- Promise and force are re-defined in R5RS terms in SRFI-45. I agree with this point, but I don't know the deeper reasons why they are included, because even opening the possibility of implementation-defined optimized representations of these primitives could have been done with an SRFI.
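The renaming mentioned in the first point really is a one-liner (first is also provided by SRFI-1):

(define first car)
(define rest  cdr)

(first '(1 2 3))  ; => 1
(rest  '(1 2 3))  ; => (2 3)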
Thanks to everyone for your comments.
Best regards
Thanks for the feedback. I appreciate it. I think I'd like to narrow my focus to the primary thing, which was the lack of generality of the built-in procedures, and generics.
Making these efficient to compile (especially with a module system) seems hard, but rewarding. I'm thinking about how to do this more. Personally, I don't think efficiency in terms of constant factors should often win versus code which could be more general. Clearly this is a value choice; however, I wonder how well a compiler can eliminate type dispatching without adding type annotation to the language.
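To make that trade-off concrete, a naive sketch (hypothetical generic-length, plain type predicates) of the runtime dispatch a generic built-in implies when the compiler has no type information; the open question above is how much of this dispatch a compiler can specialize away:

;; Hypothetical generic length; each branch falls through to the
;; existing type-specific procedure.
(define (generic-length x)
  (cond ((null? x)   0)
        ((pair? x)   (length x))
        ((string? x) (string-length x))
        ((vector? x) (vector-length x))
        (else (error "generic-length: unsupported type:" x))))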
So, I'm thinking about these things and will let them stew for... another 10 years? I don't know.
Thanks, -Jason
Hi Jason,
(Just so I got you right, by generality of built-in procedures you do actually mean generics, as in, different actual routines are invoked based on type?)
I believe something like a generics / multiple-dispatch system would be really handy.
Through abstract-interpretation analyses like 2-CFA, you can probably pin down quite a lot of data types - maybe even 95% for typical code - and thus which exact procedures to use at expansion time, and so also get really good performance.
Prof. Matt Might at Utah works with abstract interpretation; perhaps he has published an implementation of a Scheme-based abstract interpreter of Scheme.
I suppose implementing this atop standard Scheme makes good sense.
2013/1/7 Jason Felice jason.m.felice@gmail.com
Thanks for the feedback. I appreciate it. I think I'd like to narrow my focus to the primary thing, which was the lack of generality of the built-in procedures, and generics.
Making these efficient to compile (especially with a module system) seems hard, but rewarding. I'm thinking about how to do this more. Personally, I don't think efficiency in terms of constant factors should often win versus code which could be more general.
Clearly this is a value choice; however, I wonder how well a compiler can eliminate type dispatching without adding type annotation to the language.
Without having studied it too deeply, I believe abstract interpretation can help you a lot with this. If you rely on objects having a type slot, like Gambit's define-type structures have (it's ##vector-ref:able on slot 0 or 1), then you can always dig this out at runtime and handle it correctly then.
So, I'm thinking about these things and will let them stew for... another 10 years? I don't know.
Thanks, -Jason
Please feel free to share reflections you're coming up with along the way.
Brgds, Mikael
On Mon, Jan 07, 2013 at 10:37:08AM -0500, Jason Felice wrote: ... ...
I don't think efficiency in terms of constant factors should often win versus code which could be more general. Clearly this is a value choice; however, I wonder how well a compiler can eliminate type dispatching without adding type annotation to the language.
For me, the value of type annotations is the possibility of static type checking, which catches bugs fast. That the compiler can then use the information to generate better code is a pleasant freebie.
It would be interesting to see if the strategy in, say, typed Racket, could be usefully adapted to Gambit.
-- hendrik
On 08-01-13 19:18, Hendrik Boom wrote:
On Mon, Jan 07, 2013 at 10:37:08AM -0500, Jason Felice wrote: ... ...
I don't think efficiency in terms of constant factors should often win versus code which could be more general. Clearly this is a value choice; however, I wonder how well a compiler can eliminate type dispatching without adding type annotation to the language.
For me, the value of type annotations is the possibility of static type checking, which catches bugs fast. That the compiler can then use the information to generate better code is a pleasant freebie.
It would be interesting to see if the strategy in, say, typed Racket, could be usefully adapted to Gambit.
Bigloo is another older example of a Scheme with optional type annotations.
Marijn
Gambit also has optional type annotations through JazzScheme: www.jazzscheme.org
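As a point of comparison, a rough sketch of what Gambit itself already offers in this direction through compiler declarations (whole-region hints rather than per-binding type annotations; this is only an illustration, not a substitute for a real type system):

(declare (standard-bindings) (fixnum) (not safe))
; within this scope, generic arithmetic is specialized to fixnum arithmetic

(define (sum-to n)
  (let loop ((i 0) (acc 0))
    (if (< i n)
        (loop (+ i 1) (+ acc i))
        acc)))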
2013/1/5 Meng Zhang wsxiaoys.lh@gmail.com
Speaking of syntactic closures: if we ignore syntax-case, I disagree that it brings "incredible" complexity. I had been uncertain about syntactic closures for years [..] I found the concept quite straightforward.
2013/1/6 Álvaro Castro-Castilla alvaro.castro.castilla@gmail.com
I agree with Meng. I see syntax-rules as a DSL for hygienic macros. It is completely "schemey" in the same way libraries like Kanren for logic programming, or FrTime for reactive programming are. The only difference is that when using hygienic macros, your code becomes data as well, to be processed before it actually turns into code.
Meng and Alvaro, ah, noted. To have a clearer take on them personally, I'd need some serious experience using them, and until now I have not really dug into them that much.
It's really unfortunate, in a sense, that in a hybrid S.C. and define-macro environment, any use in define-macro of symbol? or any kind of symbol inspection or comparison easily becomes *completely messed up*.
Here's a mild example:
(define-macro (debug-print . a)
  (for-each (lambda (e)
              (cond ((symbol? e) (print "'" (symbol->string e) " "))
                    ((string? e) (print e " "))
                    (else (write e) (print " "))))
            a))
(debug-print 5 + 7 "\n")
What is printed in place of "+" depends entirely on which SC macros happen to be loaded and on the expander at that moment and in that instance.
It could |write| any expander-internal structure really, with any number of object dependencies.
Now this was a really basic example, if some more extensive one comes to your mind feel free to share.
Meng:
What I sorely missed during the learning process, though, was an easy-to-understand, concrete implementation with documentation.
Yes! Like, a complete reference of Scheme macro systems: How to use them, strengths and limitations, how to implement them on their own and in hybrid forms, and what the challenges are, including illustrative examples.
This would do well as a downloadable PDF book.
Alvaro:
Actually, when you take into account a couple of pitfalls regarding lexical scoping and shadowing, and use a variety of techniques, including continuation-passing style, syntax-rules macros become extremely powerful and similar to regular recursive Scheme.
Can you give any code examples of use of this variety of techniques for writing extremely powerful syntax-rules macros?
2013/1/8 Hendrik Boom hendrik@topoi.pooq.com
On Mon, Jan 07, 2013 at 10:37:08AM -0500, Jason Felice wrote: ... ...
I don't think efficiency in terms of constant factors should often win versus code which could be more general. Clearly this is a value choice; however, I wonder how well a compiler can eliminate type dispatching without adding type annotation to the language.
For me, the value of type annotations is the possibility of static type checking, which catches bugs fast. That the compiler can then use the information to generate better code is a pleasant freebie.
It would be interesting to see if the strategy in, say, typed Racket, could be usefully adapted to Gambit.
Yes, type annotations that catch bugs already at expand time would be of value.
The abstract interpreter might not catch everything at compile time, of course, but probably enough. I'd be curious to know what kind of performance such a solution would have .. I wonder what Might has published recently.
I tried typing this code several times into Gambit on my Samsung phone. The first several times it simply dropped back to the phone's default screen, and each time when I re-invoked Gambit it would act as it should. This last time it went to the normal Gambit screen, but all that's showing is the header line with a white screen. I believe the last time it offered to send a report, which I did. Advanced Task Killer does not solve the problem.
Restarting the phone does solve this problem.
Hi Steve,
That's interesting. I would guess this has nothing to do with the macro facility or this code per se, but that this behavior comes from some completely different source, like the general functioning of the console driver or something. Thoughts?
M
And it may just be the phone. I believe I had a different experience running it on my Kindle Fire. Have to try that again.
I've been planning to write an extensive document about syntax-rules, since there is a variety of techniques that aren't documented anywhere except in the mailing lists or Usenet groups. Actually, you can even write unhygienic macros using syntax-rules, but that's a whole different story (and the paper describing the technique is 15 pages showing you how to break the system).
I'll try to write that document as soon as possible!
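(A small taste of those techniques in the meantime: an accumulator-passing syntax-rules macro, a close relative of the continuation-passing style Álvaro mentions, which reverses its arguments at expansion time; the macro name is made up for illustration:)

(define-syntax reverse-apply
  (syntax-rules ()
    ((_ op (acc ...))            (op acc ...))
    ((_ op (acc ...) x rest ...) (reverse-apply op (x acc ...) rest ...))))

(reverse-apply list () 1 2 3)   ; expands to (list 3 2 1) => (3 2 1)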
This part sparked a thought with me later.
Obviously, in any case, the following is a very low-priority question, as performance is good already:
Marc, do the current GC and the future GC copy objects depth-first or breadth-first?
If the former, the order of objects will generally be extremely close in RAM after any GC copy operation, which would be beneficial from the CPU-caching point of view, while with the latter they'd be further apart.
(Obviously solid objects like strings and special type vectors are always 'close in memory' with themselves.)
Example:
(define a (vector b c)) (define d (vector e f))
(define g (vector a d))
If by depth the order in RAM after GC copy would be g a b c d e f, and if by breadth, g a d b c e f.
Regards, Mikael
2013/1/5 Mikael mikael.rcv@gmail.com ..
Scheme
[..]
is not as heavily optimized for contemporary CPU caching as
[..] ..