See the original GDoc document http://docs.google.com/Doc?docid=0ARkLUElWcqxDZGdkOWo3c2ZfOTloamhkNm1ocw&hl=en to have access to the nice drawing ;-).
Erick
Git Notes
Following changes from a remote repository with custom modifications
Git provides tools which can be used to ease maintaining a set of patches in sync with an evolving project. The instrumentation of WebKit is an example of that kind of usage. The most interesting feature is the automatic application of modifications to newer code when no conflicts arise. This can be done either by merging with newer commits or by rebasing commits on top of newer ones. For a comparison between the two approaches, see this blog post http://softwareswirl.blogspot.com/2009/04/truce-in-merge-vs-rebase-war.html. Rebasing is preferred here because it makes the commits easier to review for someone unfamiliar with the code: the reviewer only needs to understand how the changes relate to a specific version of the master code.
However, that kind of usage is potentially dangerous in a distributed version control system. It rewrites the commit history, which makes merging between independently developed branches a lot harder when someone else has made commits on top of a rebased branch. Without care, it creates duplicate entries in the history. Even with care, it forces everyone with a copy of the branch to manually fix their history. See the official git-rebase documentation http://www.kernel.org/pub/software/scm/git/docs/git-rebase.html, especially the section "Recovering from upstream rebase".
By leaving the old branches in place, we should avoid having people manually fix their history. They will simply have to track an additional branch each time we rebase the changes. It also leaves previous versions available, should we need to reproduce the traces we made in the past (as long as the website's JavaScript code doesn't change significantly...).
In the case of WebKit, I suggest having branches with the following naming convention: "prof_svn<rev nb>" where <rev nb> is the svn revision number mentioned in the commit message of the base git commit. In the following example:
The profiler should have a branch name of "prof_svn40". After some time, when we want to profile a more recent version of WebKit we use the following commands:
git checkout master
git pull upstream master      # Add the latest changes to the master branch
git log -n 1                  # Retrieve the svn revision number from the commit message
git checkout prof_svn40       # Switch to the profiler branch
git checkout -b prof_svn42    # Create a new branch containing the same commits
git rebase master             # Rebase all the commits on top of the HEAD of master
We should end up with the following repository:
Note that further modifications of older branches will need to be ported manually to newer versions, so we should refrain from doing this.
Ok, well I went through the rebase in a new branch. Unfortunately, it seems that there have been changes in the WebKit interpreter which break my code, and require a pretty significant refactoring of some things.
Notably, tracking object types may now require a multiple-pass analysis because, for example, it seems it is no longer possible to know which object a constructor constructs until the constructor returns that object, at which point the constructor may already have touched the object several times.
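A small illustration of the problem (hypothetical code, not taken from the instrumentation itself):

    // A constructor that touches the object before returning it.
    function Point(x, y) {
        this.x = x;          // property writes on the object under construction
        this.y = y;
        this.move(1, 1);     // a method call, still before the constructor returns
    }

    Point.prototype.move = function (dx, dy) {
        this.x += dx;
        this.y += dy;
    };

    // Only once 'new Point(3, 4)' returns do we know which object was constructed,
    // but by then the constructor has already read and written its properties.
    var p = new Point(3, 4);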
I'm not quite sure what to do at this point. Should we try to keep track of the WebKit changes, possibly complicating our code, or stick to an older version of WebKit and hope that, when we actually benchmark real websites, our WebKit branch is still able to run all recent websites properly?
I think we need to figure out what exactly we want to do with this project, and whether we actually want to publish an article about this. Do we want to publish an article reasonably soon (e.g., this year)? How exactly do we want to track property accesses on objects? What kind of statistics do we want to gather, specifically?
- Maxime
On 2010-06-27 at 1:01, chevalma@iro.umontreal.ca wrote:
I'm not quite sure what to do at this point. Should we try to keep track of the WebKit changes, possibly complicating our code, or stick to an older version of WebKit and hope that, when we actually benchmark real websites, our WebKit branch is still able to run all recent websites properly?
I would suggest sticking to the old version of WebKit until it is absolutely necessary to upgrade. My intuition is that it will be more work to try to keep it up to date every month or so than to do it once or twice per year, even considering the scope of the changes to WebKit.
Hopefully, in the meantime, we might find out that integrating our engine in a browser will be less work than maintaining the current profiler.
Erick
I would suggest sticking to the old version of WebKit until it is absolutely necessary to upgrade. My intuition is that it will be more work to try to keep it up to date every month or so than to do it once or twice per year, even considering the scope of the changes to WebKit.
I guess we can stick to the old version until an upgrade becomes necessary, and redo the instrumentation at that point if needed.
Hopefully, in the meantime, we might find out that integrating our engine in a browser will be less work than maintaining the current profiler.
This is what I would like as well; I think that instrumenting through our own engine will probably be ten times easier. However, we don't know how much time it will take until our system can run in a real browser and support all of ECMAScript 5. Let's hope it's less than a year...
- Maxime
Marc, I saw you made some commits on git, one with the comment:
"Implement AST walker and free-variable analysis (for closures)"
Is the actual closure conversion done? Is your code ready for me to work on the AST->IR conversion?
I'm just asking because I assumed this wasn't the case and have been working on other things instead.
- Maxime
On 2010-06-27, at 11:47 PM, chevalma@iro.umontreal.ca wrote:
Marc, I saw you made some commits on git, one with the comment:
"Implement AST walker and free-variable analysis (for closures)"
Is the actual closure conversion done? Is your code ready for me to work on the AST->IR conversion?
I'm just asking because I assumed this wasn't the case and have been working on other things instead.
Yes, it is ready for you. There is no closure conversion per se. Each FunctionExpr node has a free_vars field which contains a hash map of all the free variables of the function (the variables from the enclosing scope which are referred to by the function). Those are the variables that need to be stored in the closure object. Depending on the implementation strategy, you may want to remove the top-level variables from the free_vars (if there is only one global object... note that this may not be the case in a multi-threaded JavaScript). For now just remove them... but keep this in mind if Tachyon ever goes multi-threaded.
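To make the terminology concrete, here is a made-up snippet (the variable names are illustrative, not from our code):

    var g = 1;                        // top-level (global) variable

    function makeCounter(start) {
        var count = start;            // local to makeCounter
        return function () {          // inner FunctionExpr
            count += g;               // 'count' and 'g' are free in the inner function
            return count;
        };
    }

    // The inner function's free_vars would contain 'count' and 'g'.
    // If all globals live on a single global object, 'g' can be dropped from the
    // closure and accessed through that object instead.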
You'll probably want to do a mutation analysis also. The reason this is not done at the AST level is that you have said you want to do optimizations at the IR level. Some optimizations (such as dead-code elimination) may remove some mutations, and thus some variables are no longer mutated after optimization. Also, some variables may not be free anymore either. This is why I feel that doing optimizations at both the AST and IR levels is pointless (redundant/incomplete). But because the IR representation is low-level, it will be hard to do some optimizations (we've had this discussion before... I remain skeptical about the depth of transformations/optimizations we can do at the IR level).
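A tiny illustration of why dead-code elimination affects which variables count as mutated (again, hypothetical code):

    function outer() {
        var a = 0;
        var b = 1;
        if (false) {
            a = 2;            // dead code: once eliminated, 'a' is never mutated
        }                     // after its initialization
        return function () {
            return a + b;     // closure over 'a' and 'b'
        };
    }

    // Before dead-code elimination, 'a' looks mutated and may need a mutable cell
    // in the closure; afterwards it can be captured by value just like 'b'.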
Marc