A new beta of Gambit-C 4.0 is now available in source form at this address:
http://www.iro.umontreal.ca/~feeley/gambc40b13.tar.gz
Here's what's new:
- The "table" type. Tables map keys to values. Gambit has an efficient implementation of tables using hashing and open addressing. The runtime system and compiler now use tables internally instead of association lists in critical places. This improves the speed of the compiler when compiling large files containing many constants, or when the -debug option is used (on some tests the compiler runs 5 times faster). Tables can hold their keys weakly and/or their values weakly. The key comparison procedure and hashing procedure can be specified when a table is created. A reasonably efficient hashing procedure is used by default when the key comparison procedure is eq?, eqv?, equal?, string=? or string-ci=?.
- Hashing. There are several hashing procedures (symbol-hash, keyword-hash, string=?-hash, string-ci=?-hash, eq?-hash, eqv?-hash, and equal?-hash). The procedures object->serial-number and serial-number->object are more efficient (thanks to eq? tables).
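For instance (a small sketch; the values returned are illustrative):

  (define obj (list 1 2 3))
  (define n (object->serial-number obj)) ; a small exact integer identifying obj
  (eq? (serial-number->object n) obj)    ; => #t, the very same object comes back
  (equal?-hash '(1 2 3))                 ; => a fixnum suitable for hashing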
- A new implementation of syntax-case is now included. This version preserves source-code location information, so it makes debugging easier than the previous version. It is usable in the interpreter and in the compiler. The integration with Gambit is not perfect: local variables are renamed, some special forms are transformed (e.g. when pretty-printing a procedure the code may look very different from what the programmer wrote), and some Gambit-specific special forms (such as ##namespace, ##declare, etc) are not available when using syntax-case. For this reason syntax-case is not enabled by default. To use it you must start Gambit like this:
% gsi ~~/syntax-case -
or load it from your customization file.
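Once it is loaded, portable syntax-case macros work as usual; a minimal sketch:

  (define-syntax swap!
    (lambda (stx)
      (syntax-case stx ()
        ((_ a b)
         (syntax
          (let ((tmp a))
            (set! a b)
            (set! b tmp)))))))

  (define x 1)
  (define y 2)
  (swap! x y)
  (list x y)   ; => (2 1)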
- The web-server example has been extended to demonstrate how web-continuations can be used. There is also a fairly complete library for dynamically generating HTML. It shows how the ##namespace and ##include forms can be used to modularize code.
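As a rough sketch of the idiom (file names, the include path and the bold procedure are illustrative, not the ones used in the actual example):

  ;; html.scm -- definitions live in the "html#" namespace
  (##namespace ("html#"))          ; unqualified names defined below become html#...
  (##include "~~/lib/gambit#.scm") ; ...but the standard Gambit bindings stay visible

  (define (bold . content)         ; i.e. html#bold
    (string-append "<b>" (apply string-append content) "</b>"))

and in client code:

  (##namespace ("html#" bold))     ; refer to html#bold simply as bold
  (display (bold "hello"))         ; prints <b>hello</b>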
- Most of the bugs and misfeatures reported on gambit-list have been fixed. Other bugs fixed: equal? on structures, passing Scheme functions to C, imprecise error messages, a Mac OS X assembly code problem when using --enable-debug, and more. The system builds cleanly and has been tested in 32-bit and 64-bit environments (mainly Linux and Mac OS X).
- The source code for GUIDE (Gambit Universal IDE) is now included in the distribution. Unfortunately, there were some problems getting the makefiles and configure script working properly with Qt, so currently GUIDE is disabled. I expect this to be fixed shortly. [If you feel adventurous the sources are in lib/guide.]
- Gambit is now dual licensed. You have the option to choose between the Apache license and the LGPL.
Marc
Whoot!!!
Marc,
Really nice! Congratulations!
- The web-server example has been extended to demonstrate how web-continuations can be used. There is also a fairly complete library for dynamically generating HTML. It shows how the ##namespace and ##include forms can be used to modularize code.
Maybe it's my setup (I compiled Gambit-C under cygwin on a 2.2GHz Celeron running Windows XP with 764Mb RAM), but the web-continuations example is horribly slow. Each request takes many seconds to complete. Is this normal?
Dominique
I got the same result under Ubuntu Linux running on dual Pentium III Xeon 500s. It was horribly slow using interpreted Scheme. However, when the Scheme source was compiled it sped up significantly. It went from about 30 seconds per request to < 1 second per request. I was a bit surprised by that behavior.
My guess is that this problem doesn't have much to do with your gambit installation or with the speed of compiled vs interpreted code. The example passes the continuation around via a query parameter, not by a hidden form field (though the continuation is included there too). This makes for one honking long URL. The example works fairly well for me with Linux/Firefox regardless of whether it is compiled or interpreted (although there is some funkiness with the browser location bar). It also ran OK with Windows/Firefox. The version of IE 6 I tested couldn't (or wouldn't) handle continuation-sized URLs at all.
I bet that the example would run very well if the continuation was passed around using POST instead of GET. Probably Marc was just too busy implementing new features to bother with decoding form data.
Ben
On Thu, 12 May 2005 ben@fuhok.net wrote:
My guess is that this problem doesn't have much to do with your gambit installation or with the speed of compiled vs interpreted code.
I don't think that's correct. Gambit's serialization of continuations is slow, at least in part due to linear lookups being done on all the serialized data when constructing the external representation. Because of the closure representation used in the interpreter, a lot of data gets included. The new table facility might be used to help implement this in a more efficient way.
The example passes the continuation around via a query parameter, not by a hidden form field (though the continuation is included there too).
Note that if you're using GET, it doesn't matter whether you pass the continuation in a hidden form field or tack it onto the URL.
I bet that the example would run very well if the continuation was passed around using POST instead of GET.
It would still be slow. I use continuation serialization to implement process migration in my system and even a "small continuation" will take around a second or two to serialize. When I was including libraries like SSAX or htmlprag, the system was hanging for minutes. At first I thought my program was stuck in an infinite loop, but that's because I was underestimating infinity :)
Guillaume
On 12-May-05, at 6:41 PM, Guillaume Germain wrote:
On Thu, 12 May 2005 ben@fuhok.net wrote:
My guess is that this problem doesn't have much to do with your gambit installation or with the speed of compiled vs interpreted code.
I don't think that's correct. Gambit's serialization of continuations is slow, at least in part due to linear lookups being done on all the serialized data when constructing the external representation. Because of the closure representation used in the interpreter, a lot of data gets included. The new table facility might be used to help implement this in a more efficient way.
Indeed, using tables to implement serialization would speed things up. Currently association lists are used, which leads to quadratic time complexity. I had started changing the implementation to use tables but ran into some problems related to pretty-printing, so it will have to wait until the next release.
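To make the complexity difference concrete, here is a rough sketch of the kind of sharing-detection walk a serializer performs, written with an eq? table; this is only an illustration of the technique, not the actual runtime code:

  ;; Assign a small integer id to every pair reachable from obj,
  ;; visiting shared structure only once.  With an association list the
  ;; "already seen?" test is O(n), making the walk O(n^2) overall;
  ;; with an eq? table the test is roughly O(1), making the walk O(n).
  (define (number-objects obj)
    (let ((seen (make-table test: eq?))
          (next 0))
      (let walk ((x obj))
        (if (and (pair? x)                      ; only pairs in this sketch
                 (not (table-ref seen x #f)))
            (begin
              (table-set! seen x next)
              (set! next (+ next 1))
              (walk (car x))
              (walk (cdr x)))))
      seen))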
Marc
On Thu, May 12, 2005 at 06:41:19PM -0400, Guillaume Germain wrote:
I don't think that's correct. Gambit's serialization of continuations is slow, at least in part due to linear lookups being done on all the serialized data when constructing the external representation. Because of the closure representation used in the interpreter, a lot of data gets included. The new table facility might be used to help implement this in a more efficient way.
That sounds reasonable. My experience with continuation serialization consists of running the example from the manual.
The example passes the continuation around via a query parameter, not by a hidden form field (though the continuation is included there too).
Note that if you're using GET, it doesn't matter whether you pass the continuation in a hidden form field or tack it onto the URL.
I bet that the example would run very well if the continuation was passed around using POST instead of GET.
Though I think I was way off on the cause of this problem, it does matter a little whether you GET or POST, since there can be a limit to the amount of data you can pass with GET. I couldn't get the example to work with IE, maybe for that reason.
It would still be slow. I use continuation serialization to implement process migration in my system and even a "small continuation" will take around a second or two to serialize. When I was including libraries like SSAX or htmlprag, the system was hanging for minutes. At first I thought my program was stuck in an infinite loop, but that's because I was underestimating infinity :)
When I run the example, it is indeed a little slower when interpreted, but only by 1-2 seconds. I'm not experiencing 30+ second delays for whatever reason.
Thanks for clearing that up for me,
Ben
When I run the example, it is indeed a little slower when interpreted, but only by 1-2 seconds. I'm not experiencing 30+ second delays for whatever reason.
1 or 2 seconds for a web server is a HUGE delay. Suppose that you try to serve hundreds of pages simultaneously... In my application domain (speech-enabled applications), such delays are unacceptable.
I know Marc is not a big proponent of the other web-continuation approach (the one taken by PLT Scheme, for example) that keeps the continuation on the server. But for some domains like mine, there is no need to worry about the back-button problem. So the other approach is viable and would not incur this serialization penalty.
Just my 2 cents.
On 5/13/05, Dominique Boucher schemeway@sympatico.ca wrote:
1 or 2 seconds for a web server is a HUGE delay. Suppose that you try to serve hundreds of pages simultaneously... In my application domain (speech-enabled applications), such delays are unacceptable.
It's pretty unacceptable in any web app. Also, 450k per page is way too much bandwidth usage. However, Marc never said this was a production-quality web server, nor does he claim to be an expert in web systems. The only thing he ever said is that this would be a good place to start, and it is.
I know Marc is not a big proponent of the other web-continuation approach (the one taken by PLT Scheme, for example) that keeps the continuation on the server. But for some domains like mine, there is no need to worry about the back-button problem. So the other approach is viable and would not incur this serialization penalty.
Even if you do need to worry about the back button, it's pretty trivial to fix. Just implement sessions, store the continuation in the session, and write out a key attached to that continuation in a form somewhere (see the sketch below). Initially you can store the session ids in a cookie; later on you can implement more advanced things like URL rewriting and whatnot. All of these problems were solved quite some time ago; it's just a matter of applying those lessons to this system.
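Here is a rough sketch of that idea using the new table type for the session store (everything apart from the table operations is simplified or hypothetical; a real version needs an HTTP layer, a proper random key generator and session expiry):

  (define sessions (make-table test: equal?))   ; session key -> captured continuation

  (define (store-continuation! k)
    (let ((key (number->string (random-integer 1000000000)))) ; toy key generation
      (table-set! sessions key k)
      key))                                 ; embed this key in a hidden form field

  (define (resume-continuation key reply)
    (let ((k (table-ref sessions key #f)))
      (if k
          (k reply)                         ; jump back into the suspended request
          (error "unknown or expired session:" key))))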
As for storing system data on the client, that's never a good idea for reasons I hope are obvious.
Eric Merritt wrote:
As for storing system data on the client, that's never a good idea for reasons I hope are obvious.
Agreed about system data, but it does bring up the interesting argument about REST-ful (read: highly scalable) web applications, where state maintenance is pushed onto the client rather than kept in server sessions. I realize this is a bit off-topic, but I think that continuations (suitably sped up and obfuscated, perhaps?) on the client side could be a reasonable design for future applications.
On Fri, 13 May 2005, Bruce Butterfield wrote:
Eric Merritt wrote:
As for storing system data on the client, that's never a good idea for reasons I hope are obvious.
Agreed about system data, but it does bring up the interesting argument about REST-ful (read: highly scalable) web applications, where state maintenance is pushed onto the client rather than kept in server sessions. I realize this is a bit off-topic, but I think that continuations (suitably sped up and obfuscated, perhaps?) on the client side could be a reasonable design for future applications.
I am of that opinion. With a compact enough (signed or encrypted) object representation it might become practical and useful to store the continuation on the client side.
This solves the problem of when to time out sessions on the server side. Also, the data isn't stored in a centralized place on the servers, which could help with load balancing.
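A sketch of what that could look like, assuming Gambit's object->u8vector and u8vector->object serialization procedures; hmac-sign, hmac-verify and secret-key are placeholders for whatever MAC or encryption library one would actually use:

  ;; Ship a signed, serialized continuation to the client and check the
  ;; signature when it comes back, refusing anything that was tampered with.
  ;; secret-key, hmac-sign and hmac-verify stand in for a real MAC library.
  (define (continuation->token k)
    (let ((bytes (object->u8vector k)))
      (vector bytes (hmac-sign secret-key bytes))))    ; token = data + signature

  (define (token->continuation token)
    (let ((bytes (vector-ref token 0))
          (sig   (vector-ref token 1)))
      (if (hmac-verify secret-key bytes sig)
          (u8vector->object bytes)
          (error "token failed signature check"))))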
I don't think the problem is solved just yet.
Guillaume
PS- In the context of this discussion, I suggest having a look at Anton Van Straaten's LL4 slides:
"Continuations continued: The REST of the computation" http://ll4.csail.mit.edu/slides/rest-slides.pdf
On 5/13/05, Bruce Butterfield bab@entricom.com wrote:
Eric Merritt wrote:
As for storing system data on the client, that's never a good idea for reasons I hope are obvious.
Agreed about system data, but it does bring up the interesting argument about REST-ful (read: highly scalable) web applications, where state maintenance is pushed onto the client rather than kept in server sessions. I realize this is a bit off-topic, but I think that continuations (suitably sped up and obfuscated, perhaps?) on the client side could be a reasonable design for future applications.
Trusting data (much less code) to the client is just not safe. Theoretically you could encrypt the data and trust the returned data, but there are any number of places where that particular path could (and, if the target is attractive enough, will) fail. By storing data on the server and only associating it with a client, you remove a whole class of possible security issues. I don't believe that the benefits of storing state client-side outweigh the steps that would need to be taken to ensure security.
On Fri, 13 May 2005, Eric Merritt wrote:
Trusting data (much less code) to the client is just not safe. Theoretically you could encrypt the data and trust the returned data, but there are any number of places where that particular path could (and, if the target is attractive enough, will) fail.
Could you give an example of a situation where you can't ensure the integrity of data you have encrypted yourself? Or rather, where do you see the places where "that path" will fail? I might be missing something obvious.
Guillaume
On 5/13/05, Guillaume Germain germaing@iro.umontreal.ca wrote:
On Fri, 13 May 2005, Eric Merritt wrote:
Trusting data (much less code) to the client is just not safe. Theoretically you could encrypt the data and trust the returned data, but there are any number of places where that particular path could (and, if the target is attractive enough, will) fail.
Could you give an example of a situation where you can't ensure the integrity of data you have encrypted yourself? Or rather, where do you see the places where "that path" will fail? I might be missing something obvious.
No encryption is perfect. Let's take the most straightforward attack, where the encryption scheme is broken and the attacker modifies the continuation in some arbitrary manner. You can bet that the protections you put in place will be circumvented; it's just a matter of how, when, and how much damage will be done.
This is a very common problem in client/server games. Game designers very often store quite a bit of player data on the client in the interest of efficiency. They implement some type of protection scheme to keep the user from modifying this game information directly. In almost every case, users quickly find a way around these protections and manipulate that data. This is a common cheat for those kinds of games. The developers then go one of two ways: they move the state data to the server and remove the problem (they, of course, then need to work out the performance issues), or they start a war of attrition with the cheaters. What I mean by this is that they change the protection scheme, increase its strength, change formats, etc. The cheaters, of course, quickly find a way to break the new scheme and the cycle continues.
Of course, this assumes you have a sufficiently attractive target to warrant the effort.
Eric Merritt wrote:
Trusting data (much less code) to the client is just not safe. Theoretically you could encrypt the data and trust the returned data, but there are any number of places where that particular path could (and, if the target is attractive enough, will) fail. By storing data on the server and only associating it with a client, you remove a whole class of possible security issues. I don't believe that the benefits of storing state client-side outweigh the steps that would need to be taken to ensure security.
Well, there's data and then there's data. What if the continuation could not be unserialized into anything useful for attacking the server? Sandboxes are useful for this. I'm not trying to minimize the security issue here, but there are some real advantages to moving closer to a peer-to-peer model vs. a dumb-client/smart-server one.
On 5/13/05, Bruce Butterfield bab@entricom.com wrote:
Well, there's data and then there's data. What if the continuation could not be unserialized into anything useful for attacking the server? Sandboxes are useful for this.
If you could do this it would seriously lessen your exposure to risk.
I'm not trying to minimize the security issue here, but there are some real advantages to moving closer to a peer-to-peer model vs. a dumb-client/smart-server one.
There are advantages and disadvantages to almost everything; it's just a matter of weighing the advantages against the disadvantages on a case-by-case basis. I believe that in the vast majority of cases the disadvantages of this approach outweigh the advantages. However, that doesn't mean it will be true in every case.
It would probably compress it quite a bit. Hell, you could probably get even better compression if you exploited specific knowledge about Gambit continuations.
On 5/13/05, Bradley Lucier lucier@math.purdue.edu wrote:
On May 13, 2005, at 2:59 PM, Eric Merritt wrote:
450k per page is way too much bandwidth usage.
I agree, of course, but I'm wondering how much this would compress, using, e.g., zlib.
Brad
Marc:
Your packaging of my HTML code generator is really cool, but perhaps you started with an older version. I'm including below the version I use now, which tries to do some automatic indentation and pretty-printing of HTML code to make it human-readable. I've also included the source and output of one of the pages of my Trillia Group web site that uses some of these features (including "unprotected" HTML output and (non-)formatting of <pre> contents). I think a few bug fixes are included, too.
Brad