I'm planning to move the source-code repository from Mercurial to git in the next few days. This change is prompted by the sluggish performance of Mercurial on the current repository for some operations I use frequently, such as "hg diff".
If you plan on developing with Gambit, in particular if you want "make update" to recompile your Gambit with the most up-to-date patches, then please get ready for the switch by making sure git is installed on your development workstation. Git is available here: http://git.or.cz/ . There are binaries for many OSes, including Mac OS X and Windows. It turns out the "hg" and "git" commands have almost the same syntax, so it should be a fairly easy transition for most Mercurial users.
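For illustration, a few everyday commands map roughly one-to-one (a sketch of typical usage, not Gambit-specific instructions):

hg clone URL  ->  git clone URL
hg pull -u    ->  git pull
hg diff       ->  git diff
hg commit     ->  git commit -a
hg log        ->  git log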
Moreover, up to now Gambit has been released in source form as a single compressed tarball, e.g. gambc-v4_2_9.tgz, which is bloated by the inclusion of all the revision-control history. Starting with the next release, Gambit will be released as 2 tarballs: gambc-v4_2_10.tgz and gambc-v4_2_10-devel.tgz . The "devel" variant, which contains the revision-control history, should be used by developers. The non-devel variant is about half the size (7 MB instead of 14 MB).
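For developers, the intended workflow with the devel tarball would be roughly the following (a sketch assuming the usual configure/make layout; the extracted directory name may differ):

tar zxf gambc-v4_2_10-devel.tgz
cd gambc-v4_2_10-devel
./configure
make
make update    # fetch the latest patches and recompile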
Marc
I've been wondering... would it be possible to separate the "extra" part from the main one? I mean the examples. I believe they are quite static compared to the rest of the code, and thus take up extra space in the tarballs (not that space is a problem, but in my opinion they'd be better kept distinct from the compiler itself).
Perhaps gambit-doc, gambit-examples, or gambit-tools (which would contain gambit.el and some extra features, such as SRFI-1 and other useful functions, perhaps with some Gambit-specific code)?
Cheers,
Adrian
On 8-Oct-08, at 4:51 PM, Adrien Piérard wrote:
I've been wondering... would it be possible to separate the "extra" part from the main one? I mean the examples. I believe they are quite static compared to the rest of the code, and thus take up extra space in the tarballs (not that space is a problem, but in my opinion they'd be better kept distinct from the compiler itself).
Perhaps gambit-doc, gambit-examples, or gambit-tools (which would contain gambit.el and some extra features, such as SRFI-1 and other useful functions, perhaps with some Gambit-specific code)?
That wouldn't save much space (about 0.5 MB uncompressed). Moreover, I prefer to have a self-contained distribution where users can easily try out some real examples with "make examples" rather than having to download them separately.
Marc
P.S. Did you know that the benchmark suite misc/bench.tgz contains a copy of the King James Bible? Perhaps I could remove the benchmarks... on the other hand, it is an inconspicuous way to spread the good word.
"MF" == Marc Feeley feeley@iro.umontreal.ca writes:
MF> It turns out the "hg" and "git" commands have almost the same syntax, so it should be a fairly easy transition for most Mercurial users.
No problems there; I only heard about Mercurial because of Gambit ;) The question is: will "make update" still update the source tree? (i.e. do I have to change my cron scripts?)
I applaud this change, and agree with you (from the "user perspective") that having the examples and documentation is essential.
Thanks, Joel
On 9-Oct-08, at 9:47 AM, Joel J. Adamson adamsonj@email.unc.edu wrote:
MF> It turns out the "hg" and "git" commands have almost the same syntax, so it should be a fairly easy transition for most Mercurial users.
No problems there; I only heard about Mercurial because of Gambit ;) The question is: will "make update" still update the source tree? (i.e. do I have to change my cron scripts?)
Yes, "make update" will have the same behavior as before.
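For example, a nightly cron entry along these lines (path and schedule are hypothetical) should keep working unchanged:

0 3 * * * cd $HOME/gambit && make update && make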
Marc
On Thu, Oct 9, 2008 at 2:47 PM, Joel J. Adamson <adamsonj@email.unc.edu> wrote:
No problems there; I only heard about Mercurial because of Gambit ;)
Me too :)
However, I have since fallen in love with Mercurial for a large number of reasons, and I use it daily on a professional basis, since my corporate sponsors mandate a wholly inadequate 90s-era VCS.
I am a little concerned about Git support on Windows, though. This may be a golden age of VCSes, but primarily on *nix. And please don't say "cygwin"; it's just *too* painful. The GnuWin32 project has brought something like 99% of the daily development food chain onto the Windows platform (which makes Win-dev barely palatable), and Mercurial is one of the best VCS options that 'just works' cross-platform out of the box.
Not that I ever build gambit from source, mind you. Just trying to keep my options open :)
david rush
On 10/10/08, David Rush kumoyuki@gmail.com wrote:
I am a little concerned about Git support on Windows, though. This may be a golden age of VCSes, but primarily on *nix. And please don't say "cygwin"; it's just *too* painful. The GnuWin32 project has brought something like 99% of the daily development food chain onto the Windows platform (which makes Win-dev barely palatable), and Mercurial is one of the best VCS options that 'just works' cross-platform out of the box.
http://code.google.com/p/msysgit/
I wonder whether it would be a good idea, and whether this is a good occasion to realize it, to move source files and generated files into separate repositories and 'link' them together using the git submodule feature.
Expected advantages:
- no clutter when looking through the history (this can possibly be mitigated by constraining git log, git diff, etc. to the non-generated paths only, although I don't think that is possible (cleanly) with the current directory structure); the same holds for "git format-patch" (one wouldn't usually want to include the generated files in diffs sent to the mailing list)
- when merging branches, there will usually be no need to deal with merge conflicts in the generated files (one would just regenerate them instead)
- especially for files generated not by Gambit itself (for example "configure"), the files can be regenerated by differing [external] software versions without having to deal with those superfluous changes in the source repository.
By still committing the generated files--to a different submodule--Gambit can still be updated through Git alone, and the possible advantage of tracking the generated files to see the effects of changes in the compiler sources is preserved.
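A minimal sketch of the submodule wiring (repository URLs and names are placeholders, not actual locations):

# superproject tying the two submodules together
git init gambit-super
cd gambit-super
git submodule add git://example.org/gambit-source.git source
git submodule add git://example.org/gambit-build.git build
git commit -m 'link source and build submodules'
# a user's clone then fetches both parts:
git clone git://example.org/gambit-super.git
cd gambit-super
git submodule init
git submodule update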
Expected disadvantages:
- all generated files need to reside in a separate directory structure; e.g. the file $BASEDIR/lib/_io.c would have to live at a place like $BASEDIR/build/lib/_io.c instead, where build/ is the submodule holding all generated files; since the "configure" file is expected to reside at the toplevel, I guess this would require that "make update" copy it from $BASEDIR/build/configure to $BASEDIR/configure (assuming that one cannot use a symlink for portability reasons).
- to commit the generated files, a separate step is necessary ("cd $otherrepo; git commit -a", or, perhaps easier, a "make commit_generated" make target?)
- to make this work with the "source" repository residing at the toplevel, the Git superproject repository (of which the "source" and "build" repositories are submodules) would need to reside in a non-standard directory, like $BASEDIR/.gitsuperproject/ instead of the usual .git/, and be accessed via the GIT_DIR environment variable, although this can probably be handled by make targets (i.e. "make update" would set GIT_DIR=$BASEDIR/.gitsuperproject when calling "git submodule update"; see the sketch after this list).
- there may be some cases to flesh out; for example, should "make update" really call "git submodule update" (which simply resets the submodules to the reference given by the superproject, throwing away changes done by the user in the submodules (they can be recovered from the git reflog, but it may still be a surprise)), or should it run "git pull" in each submodule instead?
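An untested sketch of that GIT_DIR arrangement (directory names as assumed above; with GIT_DIR set, the work tree defaults to the current directory):

cd $BASEDIR
GIT_DIR=$BASEDIR/.gitsuperproject git submodule update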
I thought I'd bring this up now because if package maintainers need to adapt some things anyway, now may be a good time to do it. (There is even the possibility of splitting the converted Mercurial repository into the source + build parts retroactively now, which won't be possible later on (without changing the sha1 sums of the whole Git history, with the associated breakage of existing clones), although that may not be important.)
I'm willing to help with the effort, although I don't know the build tools (autoconf and make) and their use in this setup well, so I would probably be quite a bit lost doing it alone.
Christian.
Hi Christian,
The idea of using two separate repositories for source and generated source is interesting. I would like to bring this to the git mailing list; they may provide insightful comments on your idea or even other approaches.
Background for Git people: Gambit-C, a Scheme (Lisp dialect) implementation, was previously stored in Mercurial. The main source is written in Scheme; the *.scm files generate *.c files, which are then compiled by gcc as usual. Both the *.scm sources and the generated *.c files are currently stored in Mercurial. The Gambit-C maintainers have recently decided to move to Git.
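For the curious, the pipeline looks roughly like this (a sketch using Gambit's gsc compiler driver; the file name is hypothetical):

gsc -c foo.scm    # compile Scheme to C, producing foo.c
# foo.c is then compiled and linked by the regular C toolchain (gcc)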
"Nguyen Thai Ngoc Duy" pclouds@gmail.com writes:
Hi Christian,
The idea of using two separate repositories for source and generated source is interesting. I would like to bring this to the git mailing list; they may provide insightful comments on your idea or even other approaches.
I think the first question is: do you (and why) need to use a version control system for generated files?
Matthieu Moy wrote on 15.10.2008 17:30:
"Nguyen Thai Ngoc Duy" pclouds@gmail.com writes:
Hi Christian,
The idea of using two separate repositories for source and generated source is interesting. I would like to bring this to git mailing list, they may provide insightul comments for your idea or even other approaches.
I think the first question is: do you (and why) need to use a version control system for generated files?
I guess we can take "yes" for granted for the first part ;) As for the why: in cases like this one, it is interesting to compare (read: diff) the output generated from different versions of the input.
I wonder whether a clever use of "excludes" and GIT_DIR would allow tracking the different filesets in the same directory, but using different repositories. I'm just afraid it's a fragile setup, in the sense that it relies on config stuff which is not tracked (and thus not reproduced automatically on clone).
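An untested sketch of such an overlay (the repository name and exclude patterns are assumptions, and as discussed below the real exclude patterns would be trickier):

# second repository overlaid on the same working directory
GIT_DIR=.git-build git init
echo '*.scm' >> .git-build/info/exclude    # build repo ignores the sources
echo 'lib/_*.c' >> .git/info/exclude       # source repo ignores generated files (pattern is a guess)
GIT_DIR=.git-build git add lib/_io.c
GIT_DIR=.git-build git commit -m 'generated files'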
Michael
Michael J Gruber wrote:
I wonder whether a clever use of "excludes" and GIT_DIR would allow tracking the different filesets in the same directory, but using different repositories. I'm just afraid it's a fragile setup, in the sense that it relies on config stuff which is not tracked (and thus not reproduced automatically on clone).
I expect that using a superproject repository to tie the two repositories together is good and necessary, because it is the link that allows one to specify which commit in the repo of generated files belongs together with which commit in the repo of source files. So just using two separate repositories, without making them submodules of a superproject, does not seem like a good idea to me.
Once there is a superproject repository, one could also commit config files of the submodules into it (I'm not sure what that buys, though--.gitignore is outside and can be committed anyway, at least as long as the two repositories are not overlaid as you suggest).
You're probably right that, strictly speaking, there is no need to move generated files out into a separate directory tree; but I think doing the move would be worthwhile, since it takes away one level of complexity (you can then access the build/.git repository without needing to set GIT_DIR), and because it may be a good idea anyway (for example, it will be easier to grep the sources without getting hits from the generated files). [Also, the exclude patterns wouldn't be easy: we couldn't simply exclude all *.c files from the view of the source repository, since some are hand-crafted; the excludes would need full paths which would have to be kept up to date manually, unless we wanted to live with newly created manual .c files having to be added with "git add -f".]
Christian.
I wrote:
I expect that using a superproject repository to tie the two repositories together is good and necessary, because it is the link that allows one to specify which commit in the repo of generated files belongs together with which commit in the repo of source files. So just using two separate repositories, without making them submodules of a superproject, does not seem like a good idea to me.
(In the meantime I've read the following pages: http://nopugs.com/2008/09/06/ext-tutorial http://nopugs.com/2008/09/04/why-ext http://flavoriffic.blogspot.com/2008/05/managing-git-submodules-with-gitrake... A comment on the latter article suggests using subtree merging instead, but that would be a very bad match for our use case. The mentioned problem of merging the git superproject makes me think, though--the superproject could be updated only by the one person publishing to the public repository, but that leaves the problem of handling merges by developers completely unsolved.)
I'm starting to think that a better idea than the superproject+2submodules approach might be to just use the two repositories ("source" + "build") and store the linking information inside the "build" repository (by adding the source repository commit id to every commit message in the build repository [or by using tags, but that doesn't seem better]), plus a program that can check out the matching "build" state for a given "source" repository checkout.
I'm willing to write this program (let's call it "intergit-find-matching-commit-in" for the purpose of this email). Question: which language should it be written in; is Perl good? (C would be a hassle for Windows users because of the C compiler requirement; shell may be too limited.)
A more detailed description of how this would work:
- one would work with the "source" repository just as one would with any project employing only one repository: make some changes to the project, commit them, test them (which includes regenerating the generated files);
- once in a while, one would commit the current generated files to the "build" repository, by either (a) using a make target (like "make commit_generated") which runs something like
eval "cd build; git commit -m 'generated files for source repository commit `git rev-parse HEAD`'"
or (b) setting up a build/.git/hooks/commit-msg script which appends a 'generated files for source repository commit <id>' line to the commit message when running "cd build; git commit -a" manually.
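(A minimal sketch of such a hook for option (b), assuming the source repository lives one directory above build/:)

#!/bin/sh
# build/.git/hooks/commit-msg: append the matching source commit id
echo "generated files for source repository commit `cd .. && git rev-parse HEAD`" >> "$1"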
- for publication, one would push both the "source" as well as the "build" repository (i.e. "cd build; git push; cd ..; git push")
- for checkout (our "make update" make target), roughly the following would happen:
git pull eval "(cd build; git checkout `intergit-find-matching-commit-in build`)"
where "intergit-find-matching-commit-in build" would first refresh an index of the links (iterate over all unseen commits, parse commit messages for /source repository commit (\w+)/ and store $1 => $commitid_in_build_repo mappings in the index), then go through "git log --pretty=format:%H" (should I also specify --topo-order (or --date-order)?) looking up the commitids in the index, stopping at the first match and outputting the mapped $commitid_in_build_repo.
This way, the "latest" or "probably best-matching" corresponding commit in the "build" repo can always be found, even if the "source" repo is ahead, which should allow building the compiler even if none is previously installed. This workflow seems more natural than the superproject+submodules approach, and it seems to entail no hassle with merge issues (only the "source" repo really needs proper merging; merging the "build" repo would only be worthwhile for maintaining the history, and as mentioned if there are conflicts, one would probably usually just regenerate the files there; there's no need to maintain linking info (with associated merge etc issues) in a separate entity (superproject) anymore, and during development, commits to the "build" repo need only be done if backwards-incompatible changes have been introduced).
Does anyone else think this is sane/interesting? Should I go ahead implementing this? Any comments, like on how the interface of the intergit-find-matching-commit-in tool should look like?
Christian.
It looks like the html and man branches of git.git.
http://git.kernel.org/?p=git/git.git;a=shortlog;h=html http://git.kernel.org/?p=git/git.git;a=shortlog;h=man
They are automatically generated when Junio pushes the branches to kernel.org. Afterwards you can do a "make quick-install-html" and install the preformatted html pages from these branches. They are generated with the dodoc.sh script from the todo branch in git.git (look inside for instructions):
http://git.kernel.org/?p=git/git.git;a=blob_plain;f=dodoc.sh;hb=todo
HTH, Santi
This script only generates the html / man branches; it doesn't help find the right version for a given git version, right?
The differences are:
- the html / man branches have a strictly linear history and are centrally maintained. This solves the distribution issue for end users. But while developing the compiler, the developers may need to go back in the history of their own development (e.g. when the current compiler doesn't work anymore), and the suspected usefulness of being able to see and track differences in the generated code is also unavailable with a strictly central approach.
- the script above is only for creating and committing the derived files, in a hook similar to the one I suggested for build/.git/hooks/commit-msg; this is the "cd build; git commit -m 'generated files for source repository commit `git rev-parse HEAD`'" part. The more interesting part is automatically finding the right commit in the generated branches for a given source commit, which is what I intend to solve with the "intergit-find-matching-commit-in" script. Said more simply: the git html / man branches do not offer automatically resolvable linking.
Christian.
On Thu, Oct 16, 2008 at 2:32 PM, Christian Jaeger christian@pflanze.mine.nu wrote:
This script only generates the html / man branches; it doesn't help find the right version for a given git version, right?
Right, one script to generate and one to get the right version.
The differences are:
- the html / man branches have a strictly linear history
Yes, because in this case there is no need to replicate the whole history; but it could be improved.
and are centrally maintained. This solves the distribution issue for end users. But while developing the compiler, the developers may need to go back in the history of their own development (e.g. when the current compiler doesn't work anymore), and the suspected usefulness of being able to see and track differences in the generated code is also unavailable with a strictly central approach.
So you can divide the problem in two: (a) generated files in the remote repositories (these can be generated automatically on the server or on a dedicated server); (b) local generated files for local commits. If both follow the same format to specify the original commit, you can use the same script to find it.
- the script above is only for creating and committing the derived files, in a hook similar to the one I suggested for build/.git/hooks/commit-msg; this is the "cd build; git commit -m 'generated files for source repository commit `git rev-parse HEAD`'" part. The more interesting part is automatically finding the right commit in the generated branches for a given source commit, which is what I intend to solve with the "intergit-find-matching-commit-in" script. Said more simply: the git html / man branches do not offer automatically resolvable linking.
They offer this ("Autogenerated HTML docs for v1.6.0.2-530-g67faa"), but there is no script around it.
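(For instance, the linked version could be recovered from the commit subject with something like this untested sketch:)

git log -1 --pretty=format:%s origin/html | sed 's/^Autogenerated HTML docs for //'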
My point was that there are other projects keeping generated files (and sometimes I would like to do that too), so you can see what they are doing. In the end, maybe your system could be useful for them as well.
Santi
Matthieu Moy wrote:
I think the first question is: do you (and why) need to use a version control system for generated files?
The project in question is a self-hosting compiler which compiles to C as an intermediate language. Providing the generated C files to users makes installation easy (it avoids the bootstrapping issue). So the issue is more 'severe' than, for example, merely generating documentation files with a third-party tool.
What may make matters worse is that there are interdependencies between a number of hand-written C files and the generated files, so it is not always possible to use an older compiler version to reproduce the generated C files for a newer compiler; if you want to merge newer compiler sources, you may thus also need the newer generated files, at least if you want to do so without fuss. So there is always a need to somehow transmit the generated files too. I guess this is easier than coding the system in a way that always preserves backwards compatibility (I haven't worked on the compiler itself yet, so this is a guess and may need confirmation).
Apart from that, I've found it useful (in another project, a document translator) to keep generated files in a VCS (Git) as well. I checked them into the *same* repository as the translator source, even though that felt ugly (for the previously mentioned reasons), because when I changed the translator I could then easily see where it had an effect on the generated output. It can even serve as a debugging aid, somewhat like a test suite does. That may be the case here too (again, I'm guessing).
How are other compiler projects that bootstrap via C dealing with this?
Christian.