Hi all -
I am wondering if there are any efforts underway to port Gambit to an LLVM back end? Would this provide an opportunity to eliminate the trampoline, at least in certain cases?
-A
Andrew I. Schein wrote:
Hi all -
I am wondering if there are any efforts underway to port gambit to a llvm back end?
(So far none that I am aware of.)
Would this provide opportunity to eliminate a trampoline, at least in certain cases?
Have you measured whether the trampoline approach carries any cost (i.e. cost that is not being optimized away by gcc), and if so, how big it is and how it could be reduced by better assembly?
(FWIW, by coincidence I've done some cross module call benchmarks recently which you can get from here: git clone http://scheme.mine.nu/gambit/experimental/crossmodulecalls/.git
I would think that cross module calls are more expensive than module-local single-host calls not because of the trampoline but because of copying of module state between continuation frames and back -- but I haven't investigated in detail.)
My guess is that about the only real benefit that an llvm backend could provide would be faster compilation times.
Christian.
On Sep 21, 2008, at 2:02 PM, Christian Jaeger wrote:
(FWIW, by coincidence I've done some cross module call benchmarks recently which you can get from here: git clone http://scheme.mine.nu/gambit/experimental/crossmodulecalls/.git
Your site is flagged by Firefox as a "Reported Attack Site" (big red box and all that); there are some details at
http://safebrowsing.clients.google.com/safebrowsing/diagnostic?client=Firefox&hl=en-US&site=http://scheme.mine.nu/gambit/experimental/crossmodulecalls/.git
Do you know why?
Brad
On Sep 21, 2008, at 4:18 PM, Bradley Lucier wrote:
On Sep 21, 2008, at 2:02 PM, Christian Jaeger wrote:
(FWIW, by coincidence I've done some cross module call benchmarks recently which you can get from here: git clone http://scheme.mine.nu/gambit/experimental/crossmodulecalls/.git
Your site is flagged by Firefox as a "Reported Attack Site" (big red box and all that); there are some details at
http://safebrowsing.clients.google.com/safebrowsing/diagnostic?client=Firefox&hl=en-US&site=http://scheme.mine.nu/gambit/experimental/crossmodulecalls/.git
Do you know why?
Ah, the answer on Slashdot ...
Bradley Lucier wrote:
On Sep 21, 2008, at 2:02 PM, Christian Jaeger wrote:
(FWIW, by coincidence I've done some cross module call benchmarks recently which you can get from here: git clone http://scheme.mine.nu/gambit/experimental/crossmodulecalls/.git
Your site is flagged by Firefox as a "Reported Attack Site" (big red box and all that); there are some details at
http://safebrowsing.clients.google.com/safebrowsing/diagnostic?client=Firefo...
Do you know why?
That page says "Diagnostic page for mine.nu/". So it's about the whole mine.nu domain, which is not owned by me but by the www.dyndns.org free domain hosting service. You can get free subdomains of (amongst others) mine.nu there. So I guess there are a few black sheep misusing other subdomains. For example, linuks.mine.nu is not served by me, but still gives the same red flag:
http://safebrowsing.clients.google.com/safebrowsing/diagnostic?client=Firefo...
So, d'oh, I should probably move the stuff to a domain I own.
Thanks for reporting.
Christian.
PS. I've now created a signed tag on the current commit of the repository. Run "git tag -v tag1" (after a "git pull" or a fresh clone). My PGP fingerprint is F033 D030 F75D E445 05A1 1865 4ECB DF80 1FE6 92DA
Hallo,
Christian Jaeger wrote:
My guess is that about the only real benefit that an llvm backend could provide would be faster compilation times.
There is also generation of machine code at runtime for several architectures.
Cheers, -alex http://www.ventonegro.org/
Alex Sandro Queiroz e Silva wrote:
There is also generation of machine code at runtime for several
architectures.
Well, to be picky, Gambit's C backend also allows generation of machine code at runtime (you have tried |compile-file| and |load|, haven't you?); I guess what you mean is that LLVM would let you avoid writing machine code to a file before loading it into the process memory.
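For concreteness, here is a minimal sketch of that workflow at a Gambit REPL (the file name and the procedure are made up for illustration):

;; Write a module to disk, compile it with the C backend (which
;; invokes the C toolchain behind the scenes), then load the
;; resulting shared object into the running process.
(with-output-to-file "hot.scm"
  (lambda () (write '(define (hot-square x) (* x x)))))

(compile-file "hot.scm")   ; runs the C compiler, producing hot.o1
(load "hot.o1")            ; dlopens the compiled module
(hot-square 21)            ; => 441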
That alone wouldn't really buy you anything, right? Writing to a file isn't a performance bottleneck, and you usually want to cache the generated code anyway and, equally importantly, share it between multiple processes.
Only if you wanted to create new assembly code *very* quickly on demand, i.e. JVM HotSpot-style technology, would a write-to-file-and-map-it approach become a hindrance (and you couldn't mutate the files while they are in use by multiple processes concurrently without taking adequate precautions). But one reason the JVM needs fast generation of assembly is that it doesn't cache it on disk.
Christian.
Hallo,
Christian Jaeger wrote:
That alone wouldn't really buy you anything, right? Writing to a file isn't a performance bottleneck, and you usually want to cache the generated code anyway and, equally importantly, share it between multiple processes.
You would need every user to have a development toolchain (compiler, assembler, linker etc.) on their machine.
Cheers, -alex http://www.ventonegro.org/
To me, faster compilation times alone would be quite a big win. Compiling takes rather a lot of time when working on larger projects with Gambit.
/Per
On 22/09/2008, Alex Sandro Queiroz e Silva asandroq@gmail.com wrote:
Hallo,
Christian Jaeger wrote:
That alone wouldn't really buy you anything, right? Writing to a file isn't a performance bottleneck, and you usually want to cache the generated code anyway and, equally importantly, share it between multiple processes.
You would need every user to have a development toolchain (compiler, assembler, linker etc.) on their machine.
Cheers, -alex http://www.ventonegro.org/
Per Eckerdal wrote:
To me, faster compilation times alone would be quite a big win. Compiling takes rather a lot of time when working on larger projects with Gambit.
I think a few questions are relevant to keep in mind:
- if just compilation speed is what one wants to solve, the aim might be attainable by simply using the upcoming LLVM-based "clang" C compiler;
- the current C-based backend isn't so bad for debugging, because it allows one to use C tools like gprof and gdb; I'm not sure how that would look with LLVM;
- there must be a reason you're not using the interpreter during development; for me it has usually been when I was developing interfaces to C code, but exactly in those cases you actually *need* C compilation (and I expect writing a backend which uses a C compiler for the C bits but directly generates code for the rest would be more difficult).
There is one drawback of the C backend that I'm currently wondering whether an LLVM backend could solve: stepping through compiled code is currently not possible from the Gambit debugger. (It is possible from gdb, but not very practical at the moment, because gdb lacks integration with the Scheme REPL for pretty-printing and other functionality, and it stops the whole Gambit runtime.) But there are also other possible solutions to this problem, like translating continuations from compiled code into continuations in interpreted code, or writing a bytecode VM as an additional backend.
(Stepping is something one may want to control at a finer granularity anyway: currently, if you want the stepper to jump over a procedure, the solution is to compile the module containing that procedure. Some better control may be useful. Much of such control over program behaviour -- which parts are compiled, and how, etc. -- can be part of the layer above the core, i.e. what we are calling "module systems".)
Christian.
Alex Sandro Queiroz e Silva wrote:
You would need every user to have a development toolchain (compiler, assembler, linker etc.) on their machine.
That's a different issue. It's a question of trading a dependency on gcc (or another C compiler) for a dependency on llvm.
(And I guess most sizable Scheme/Gambit based projects need a C compiler anyway to compile bindings to C libraries; so it would only matter for a target audience which *does* program in Scheme but *only* uses precompiled binary modules for C bindings.)
Christian.
Hallo,
Christian Jaeger wrote:
Alex Sandro Queiroz e Silva wrote:
You would need every user to have a development toolchain (compiler, assembler, linker etc.) on their machine.
That's a different issue. It's a question of trading a dependency on gcc (or another C compiler) for a dependency on llvm.
The code generator of LLVM is deployed as a library. It's a very different thing to deploy a library (linked statically or as shared objects) than to deploy a complete development environment.
(And I guess most sizable Scheme/Gambit based projects need a C compiler anyway to compile bindings to C libraries; so it would only matter for a target audience which *does* program in Scheme but *only* uses precompiled binary modules for C bindings.)
Most of the Java/Python/Ruby/Lua etc. audience does not create its own FFI modules.
Cheers, -alex http://www.ventonegro.org/
Alex Sandro Queiroz e Silva wrote:
The code generator of LLVM is deployed as a library. It's a very different thing to deploy a library (linked statically or as shared objects) than to deploy a complete development environment.
In which way is it different? On every Linux system it would just be a dependency on a package, and the rest is solved. On Windows or OS X it would probably be a question of statically linking the libraries, or bundling the libraries or binaries with your package. Why is bundling a binary more of a problem than bundling a library? (And you might have other binaries you'd want to bundle with your package anyway!)
It's a question of balance. Developing an llvm backend just because there is no nice script bundling up all your dependencies into one deployable blob would be energy invested in the wrong place.
Most of the Java/Python/Ruby/Lua etc. audience does not create its own FFI modules.
But someone has to build those for them, which is what I've been saying: you're looking at an audience which installs precompiled modules for those bindings, *but* still wants to develop Scheme code and wants it to be compiled (i.e. not interpreted). This is an audience that has access to precompiled modules (as binary packages, as in a Linux system). Are there any precompiled modules for Gambit in any Linux distro? Do you want to change that?
Christian.
Hallo,
Christian Jaeger wrote:
Alex Sandro Queiroz e Silva wrote:
The code generator of LLVM is deployed as a library. It's a very different thing to deploy a library (linked statically or as shared objects) than to deploy a complete development environment.
In which way is it different? On every Linux system it would just be a dependency on a package, and the rest is solved. On Windows or OS X it would probably be a question of statically linking the libraries, or bundling the libraries or binaries with your package. Why is bundling a binary more of a problem than bundling a library? (And you might have other binaries you'd want to bundle with your package anyway!)
I can't believe you are serious. It is not just bundling a binary. It is bundling a compiler, a linker, an assembler, header files (ANSI C and OS-specific), libc, linker scripts etc. Can't you see this is different from shipping just a library to a client?
It's a question of balance. Developing an llvm backend just because there is no nice script bundling up all your dependencies into one deployable blob would be energy invested in the wrong place.
I never asked for an LLVM backend for Gambit-C. I only replied (and now I regret that) because someone said it would be no different at all from the current system. And it would be.
Alex Sandro Queiroz e Silva wrote:
Hallo,
Christian Jaeger wrote:
Alex Sandro Queiroz e Silva wrote:
The code generator of LLVM is deployed as a library. It's a very different thing to deploy a library (linked statically or as shared objects) than to deploy a complete development environment.
In which way is it different? On every Linux system it would just be a dependency on a package, and the rest is solved. On Windows or OS X it would probably be a question of statically linking the libraries, or bundling the libraries or binaries with your package. Why is bundling a binary more of a problem than bundling a library? (And you might have other binaries you'd want to bundle with your package anyway!)
I can't believe you are serious. It is not just bundling a binary. It is bundling a compiler, a linker, an assembler, header files (ANSI C and OS-specific), libc, linker scripts etc. Can't you see this is different from shipping just a library to a client?
It's a question of balance. Developing an llvm backend just because there is no nice script bundling up all your dependencies into one deployable blob would be energy invested in the wrong place.
I never asked for an LLVM backend for Gambit-C. I only replied (and now I regret that) because someone said it would be no different at all from the current system. And it would be.
I really didn't mean to attack you; I simply want to know of solid reasons for creating an LLVM backend.
I've been an almost 100% Linux user for 9 years now (and I have only had to deliver a package to Windows users once; it was a program in Visual Basic and actually required bundling all the DLLs for it to run stably).
So I can't speak for anyone on such a system about what is required. I was replying on the assumption that by using PATH, LD_LIBRARY_PATH (or the Windows equivalents) and the gcc -I and -L options you can move all dependency files around at will; the remaining difficulty would be finding the dependencies (on Linux you could do that using strace, or by just including the whole toolchain). So I guess it is the size of the resulting package which makes you question my seriousness. I'm sure it would be worthwhile to hear numbers or experience in this area.
If you need to package libc, then why wouldn't you need to package it also when using LLVM? (My reasoning being: if libc's interfaces (aka header files) are compatible with the header files you're providing, you don't need to include the libc binary, and if they aren't compatible with the assumptions the packaged-up system makes (even if it's based on LLVM), then you'd need to include it anyway.)
Christian.
Hallo,
Christian Jaeger wrote:
I really didn't mean to attack you; I simply want to know of solid reasons for creating an LLVM backend.
Sorry if I came out harsh.
I've been an almost 100% Linux user for 9 years now (and I have only had to deliver a package to Windows users once; it was a program in Visual Basic and actually required bundling all the DLLs for it to run stably).
I really wish (seriously!) that I never had to deal with Windows again. At home I have a Linux desktop and a MacBook.
So I can't speak for anyone on such a system about what is required. I was replying on the assumption that by using PATH, LD_LIBRARY_PATH (or the Windows equivalents) and the gcc -I and -L options you can move all dependency files around at will; the remaining difficulty would be finding the dependencies (on Linux you could do that using strace, or by just including the whole toolchain). So I guess it is the size of the resulting package which makes you question my seriousness. I'm sure it would be worthwhile to hear numbers or experience in this area.
I was indeed talking about size. The GHC distribution for Windows does exactly what you suggested. But it gets huge and makes downloads painful. Besides, it puts a great burden on the people who have to support and package it, and it introduces lots of other complexities. For instance, I am working on an application for the technical analysis of stocks (although it's stalled because of more urgent things). The good applications of this kind let the user enter their own technical indicators (usually in an ad-hoc scripting language) and run them on previous stock data to see how good they are. Generating a shared object for each run would quickly degenerate into lots of shared objects on the user's hard disk. I know this is fixable, but it's nevertheless an annoyance.
If you need to package libc, then why wouldn't you need to package it also when using LLVM? (My reasoning being: if libc's interfaces (aka header files) are compatible with the header files you're providing, you don't need to include the libc binary, and if they aren't compatible with the assumptions the packaged-up system makes (even if it's based on LLVM), then you'd need to include it anyway.)
With MinGW, you have to link against the supplied libmsvcxx.a to use the Windows C runtime DLL, for instance.
Cheers, -alex http://www.ventonegro.org/
Thanks for the reply.
Alex Sandro Queiroz e Silva wrote:
I was indeed talking about size. The GHC distribution for Windows does exactly what you suggested. ...
Interesting.
The good applications of this kind let the user enter their own technical indicators (usually in an ad-hoc scripting language) and run them on previous stock data to see how good they are. Generating a shared object for each run would quickly degenerate into lots of shared objects on the user's hard disk. I know this is fixable, but it's nevertheless an annoyance.
Yep. One idea is to create a hash value of the (expanded) source code s-expression (i.e. the final compiler input) and use that for the file name, so that you never recompile if the input is the same. (And clean up on the basis of atime or randomly.)
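As an illustrative sketch of that idea (all the names here are hypothetical, and the djb2-style string hash is only a stand-in for a real digest such as SHA-1):

;; Hash the (expanded) source s-expression and use it to name the
;; compiled module, so identical input is never recompiled.
(define (sexpr-hash expr)
  (let loop ((cs (string->list (object->string expr)))  ; Gambit's object->string
             (h 5381))
    (if (null? cs)
        h
        (loop (cdr cs)
              (modulo (+ (* h 33) (char->integer (car cs))) 4294967296)))))

(define (compile-cached expr)
  (let* ((base (string-append "cache-" (number->string (sexpr-hash expr) 16)))
         (src  (string-append base ".scm"))
         (obj  (string-append base ".o1")))
    (if (not (file-exists? obj))          ; cache hit: skip the C compiler
        (begin
          (with-output-to-file src (lambda () (write expr)))
          (compile-file src)))            ; produces base.o1
    (load obj)))

(Cleanup on the basis of atime, as suggested above, would be layered on top; error handling is omitted.)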
I certainly agree that compiling directly to memory may be sensible in such a case. There are things to keep in mind, though: if you serialize closures or continuations from such code, you won't be able to deserialize them after a restart of the system, unless you first compile the same code (to memory) again, or include the assembly code in the serialization, which you don't really want to do. (This is something that also needs some care with the file-based approach (module system...); I just thought I'd mention it here.)
Christian.
I haven't looked at LLVM since its first release, and following your post I took a new look and see that there has been quite a lot of evolution. I think it would be interesting to have an LLVM back-end for Gambit, mainly because it would allow portable native code generation. Going to machine code (through LLVM) would avoid the use of trampolines (to implement tail calls), and this could have a significant impact on the performance of multi-module programs. It would also have a significant impact on the compactness of the generated machine code (once again because the trampoline machinery could be avoided).
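For readers unfamiliar with the technique, here is the generic trampoline pattern, sketched in plain Scheme (this is not Gambit's actual machinery, which operates at the level of the generated C code): instead of making a tail call directly, each step returns the next procedure to call, and a central loop performs the calls, so the host stack never grows even without guaranteed tail calls.

;; Driver loop: keep calling whatever comes back until a
;; non-procedure value signals completion.
(define (trampoline thunk)
  (let loop ((r (thunk)))
    (if (procedure? r)
        (loop (r))   ; one return-and-call round trip per tail call
        r)))

;; Mutual recursion in trampolined style: tail calls become returns.
(define (even-step n)
  (if (= n 0) 'even (lambda () (odd-step (- n 1)))))
(define (odd-step n)
  (if (= n 0) 'odd (lambda () (even-step (- n 1)))))

(trampoline (lambda () (even-step 1000000)))  ; => even

Gambit's C backend avoids the closure allocations seen here by having each generated C function return a pointer to the next one, but the return-and-dispatch round trip remains; that round trip is what direct jumps in native code would eliminate.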
For fun I tried compiling Gambit with llvm-gcc on my Mac OS X machine using
% ./configure CC=llvm-gcc --enable-single-host
% make
llvm-gcc is the gcc compiler version 4.2.1 with a code generator which goes through LLVM version 2.3.
The good news is that this works with the current Gambit v4.2.8 with only minor warnings (due to the configure script thinking the standard Mac OS X gcc is being used and passing it options that llvm-gcc knows nothing about).
The bad news is that the compile time is really high (10 minutes for some of the larger C files like lib/_num.c and lib/_io.c) and the execution speed is often lower than when using the standard gcc. Perhaps that's just because my standard gcc is version 4.0.1 and the compilers' optimization algorithms are different. Also, for an unknown reason, I/O is particularly slow. For your information I have made a table of benchmarks comparing the execution times for various settings (compiling with standard gcc at -O1, -O2, and -O3, and compiling with llvm-gcc). The table is at http://www.iro.umontreal.ca/~feeley/bench-llvm-gcc.html .
The poor performance obtained with llvm-gcc does not mean that an LLVM back-end for Gambit would yield poor performance. However, it does point to some aspects which are a source of concern and need to be looked into.
If anyone is interested in contributing to implement an LLVM back-end for Gambit, please send me an email. I can guide you through the steps required.
Marc
On 21-Sep-08, at 10:41 AM, Andrew I. Schein wrote:
Hi all -
I am wondering if there are any efforts underway to port Gambit to an LLVM back end? Would this provide an opportunity to eliminate the trampoline, at least in certain cases?
-A
-- web: www.andrewschein.com
On Mon, Sep 22, 2008 at 8:19 PM, Marc Feeley feeley@iro.umontreal.ca wrote:
I haven't looked at LLVM since its first release, and following your post I took a new look
As have I. I am glad of the reminder.
For fun I tried compiling Gambit with llvm-gcc on my Mac OS X machine
...
The good news is that this works
...
The bad news is that the compile time is really high (10 minutes for some of the larger C files like lib/_num.c and lib/_io.c) and the execution speed is often lower than when using the standard gcc.
Well, a custom back-end would be using Gambit's high-level optimizations and LLVM's low-level ones. I wouldn't be at all surprised if the use of C as an intermediate language (however stylized) introduces inefficiencies.
If anyone is interested in contributing to implement an LLVM back-end for Gambit, please send me an email. I can guide you through the steps required.
I am indirectly interested, and may become actively so depending on what happens in the next couple of months (waiting on funding :). Please keep the list posted if anyone steps up.
david rush