Stefan pushed to branch report/els-2017 at Stefan / Typer
Commits:
dd6ff23d by Stefan Monnier at 2017-01-30T02:00:51-05:00
-
- - - - -
2 changed files:
- paper.tex
- refs.bib
Changes:
===================================== paper.tex =====================================
--- a/paper.tex
+++ b/paper.tex
@@ -132,7 +132,7 @@
 syntactic malleability of Lisp by relying on the traditional Lisp-style
 S-expressions and macros.
 Its main tools to this end are the use of an infix notation for
-S-expressions, as well as the use of a pure type system to reduce the number
+S-expressions, as well as the use of a Pure Type System to reduce the number
 of syntactic categories and generally make ``everything'' first-class.
 \end{abstract}
@@ -221,7 +221,7 @@
 mixfix elements.  The parser is still very primitive, since it uses an
 operator precedence grammar~\cite{Floyd63}, but is already powerful enough
 to handle a syntax that will feel familiar to ML and Haskell users.
-Typer's core language is based on a pure type system~\cite{Barendregt91b},
+Typer's core language is based on a Pure Type System~\cite{Barendregt91b},
 so as to use a single syntactic category for types and expressions.  More
 specifically, its core language is similar to that of proof assistants such
 as Coq~\cite{Coq00} and can also be used to write logical propositions
@@ -382,12 +382,15 @@ To define a new datatype to represent singly linked lists you can write:
   List : Type -> Type;
   type List (a : Type)
     | nil
-    | cons a (List a);
+    | cons (hd : a) (tl : List a);
 \end{verbatim}
 %%
-Notice that we added a type annotation before the datatype definition: this
-is currently needed as a forward declaration otherwise the recursive use of
-\id{List} inside its definition is rejected.
+where \id{hd} and \id{tl} are field names.  We could have written just
+\verb+cons a (List a)+ instead to keep the fields anonymous.
+
+Notice that we added a type annotation for \id{List} before the datatype
+definition: this is currently needed as a forward declaration, since
+otherwise the recursive use of \id{List} inside its definition is rejected.
 More importantly, notice that this forward type declaration of \id{List}
 uses the same syntax as the previous forward declaration of the function
@@ -727,32 +730,139 @@ it was natural and important to be able to distinguish those two cases.
       \end{array}
     }
   \end{displaymath}
-  \label{fig:Lexp}
   \caption{Definition of core $\lambda$-expressions}
+  \label{fig:Lexp}
 \end{figure}
 }
\FigLexp
-No hard coded names: just initial bindings of constructs to special forms
-and primitive functions.
+Elaboration is the phase in Typer's compiler which turns an S-expression
+into what we call a $\lambda$-expression, whose (simplified) representation
+is shown in Fig.~\ref{fig:Lexp}.  Notice that \id{Lexp} represents both what
+is usually considered an \emph{expression} (such as \id{let}, \id{fun}, ...)
+and what is usually considered a \emph{type}
+(e.g.~$\id{arw}~x~t_1~t_2$, which represents the function type
+\texttt{(x~:~t$_1$) -> t$_2$}, or \id{adt}, which represents an algebraic
+data type), since this core language is a Pure Type System.
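To give a code-level feel for this representation, here is a rough OCaml-style sketch of such a datatype.  It is only an illustration: the constructor names and argument layouts below are assumptions made for this sketch, not the definitions actually used in Typer's sources.

    (* Illustrative sketch of a simplified Lexp, in the spirit of the figure;
       not Typer's actual definition. *)
    type symbol = string

    type lexp =
      | Imm of int                                  (* immediate constant *)
      | Var of symbol                               (* variable reference *)
      | Prim of string                              (* built-in primitive *)
      | Arw of symbol * lexp * lexp                 (* (x : t1) -> t2, a function type *)
      | Fun of symbol * lexp * lexp                 (* fun (x : t1) -> body *)
      | App of lexp * lexp                          (* function application *)
      | Let of (symbol * lexp * lexp) list * lexp   (* let x : t = e; ... in body *)
      | Adt of symbol * (symbol * lexp list) list   (* datatype: name and constructors *)
      | Cons of symbol * symbol                     (* constructor reference, e.g. cons of List *)
      | Case of lexp * (symbol * symbol list * lexp) list  (* case analysis with branches *)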
-Macros recognized by the type \kw{Macro}.
-
-Bidirectional type checking.
+Elaboration performs the following tasks:
+\begin{itemize}
+\item Finish the syntactic analysis: distinguish function calls from macro
+  calls, from \id{let} definitions, from \id{case} analyses, ...
+\item Infer and verify the types.
+\item Expand the macro calls.
+\end{itemize}
-Lexically scoped macros.
+This is the heart of Typer's front-end and requires a fair bit of supporting
+functionality: type inference needs to be able to normalize and compare
+arbitrary \id{Lexp} terms, while macro expansion requires the evaluation of
+arbitrary Typer code, either via a small interpreter or via the complete
+compiler and runtime system.
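To make the shape of that support layer concrete, here is a hypothetical OCaml signature; the names (whnf, conv, eval) and their types are chosen for this sketch rather than taken from the implementation.

    (* Hypothetical interface for the support machinery mentioned above. *)
    module type ELAB_SUPPORT = sig
      type lexp                       (* elaborated terms (and types) *)
      type value                      (* run-time values *)
      type env                        (* type environment: types plus known definitions *)

      val whnf : env -> lexp -> lexp          (* normalize a term, e.g. for type comparison *)
      val conv : env -> lexp -> lexp -> bool  (* are two terms convertible? *)
      val eval : env -> lexp -> value         (* evaluate closed code, e.g. a macro *)
    end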
-No notion of phase level for bindings~\cite{Flatt02}: instead, when a macro
-call is found, the macro definition needs to be \emph{closed} and its
-dependencies are all evaluated.  Since Typer is pure, these evaluations have
-no effect and their results can be cached.
+\subsection{Type checking}
-Interleaved type checking/inference and macro expansion.
+We use a bidirectional type-checking approach~\cite{Pierce00} to minimize
+the required type annotations.  Elaboration is thus split into two mutually
+recursive functions:
+\begin{itemize}
+\item \id{infer} takes a type environment and an \id{Sexp} and returns the
+  corresponding elaborated \id{Lexp} along with its type (also an \id{Lexp}).
+\item \id{check} takes a type environment, an \id{Sexp}, and its expected
+  type (an \id{Lexp}), and returns the elaborated form, of type \id{Lexp}.
+\end{itemize}
+The type environment carries the type of every variable in scope, of
+course, but it also carries the definitions of all the variables in scope
+that were defined via a \id{let} binding.
+
+A complete presentation of the type checker is beyond the scope of this
+article, but we can sketch the way function calls are handled:
+\begin{enumerate}
+\item When a function call is encountered, \id{infer} is called on the
+  function part.
+\item The returned type is verified to be that of a function and is then
+  split into the argument type and the return type.
+\item Then \id{check} is called on the argument, since we now know its
+  expected type.
+\item Finally, we construct the \id{app} node and return it along with
+  its type.
+\end{enumerate}
+
+\subsection{Final syntactic analysis}
+
+Elaboration has to distinguish the different constructs of the language.
+In Typer, just as in Scheme, this is done without hard-coding the meaning
+of any identifier.  More specifically, neither \id{check} nor \id{infer}
+checks whether an \id{Sexp} \id{node} has a symbol such as \id{let_in_} as
+its head.  Instead, when those functions encounter a \id{node}, they do the
+following:
+\begin{enumerate}
+\item Call \id{infer} on its head.
+\item If the returned type is \id{Special-Form}, make sure the expression
+  is a primitive (i.e.~of the form \id{prim}), and if so, call the
+  corresponding special form's elaboration function, found in a global
+  table.  There is a special form for each core syntactic construct, such
+  as \kw{let} and \kw{case}.
+\item If the returned type is \id{Macro}, then it is a macro call, and we
+  expand it, as detailed below.
+\item Otherwise, it should be a function call, so we proceed as outlined
+  above.
+\end{enumerate}
+
+Note that at step 2 above we have to double-check that the head is
+a \id{prim}, because for source code such as
+``\verb|(if x then let_in_ else case_) 42|'' the head is a valid
+expression of type \id{Special-Form} but is not a primitive, so we have to
+reject such meaningless code.
+
+So the way keywords like \id{let_in_} get their special meaning is simply
+by binding them to the corresponding special-form primitive in the initial
+environment.  The programmer is free to rebind those identifiers if she
+wants, or to bind the corresponding primitive to other identifiers.
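As an illustration of the dispatch just described, here is a rough OCaml-style sketch.  Everything in it (the stub lexp type, infer, check, conv, split_arrow, expand_macro, and the special_forms table) is invented for this sketch; it is not the actual code or API of Typer's elaborator.

    (* Sketch only: the head-type dispatch performed on an Sexp node. *)
    type sexp                                   (* S-expressions (left abstract here) *)
    type env                                    (* type environment (left abstract here) *)
    type lexp =
      | Prim of string                          (* built-in special-form primitive *)
      | App of lexp * lexp                      (* application node *)
      | SpecialFormT                            (* the type "Special-Form" *)
      | MacroT                                  (* the type "Macro" *)
      | Other                                   (* everything else, elided *)

    (* Assumed helpers, left unimplemented in this sketch. *)
    let infer : env -> sexp -> lexp * lexp = fun _ _ -> assert false
    let check : env -> sexp -> lexp -> lexp = fun _ _ _ -> assert false
    let conv : env -> lexp -> lexp -> bool = fun _ _ _ -> assert false
    let split_arrow : env -> lexp -> lexp * lexp = fun _ _ -> assert false
    let expand_macro : env -> lexp -> sexp list -> sexp = fun _ _ _ -> assert false
    let special_forms : (string, env -> sexp list -> lexp * lexp) Hashtbl.t =
      Hashtbl.create 16

    (* Elaborate "(head arg1 arg2 ...)" and return the Lexp and its type. *)
    let infer_node env head args =
      let head', head_type = infer env head in
      if conv env head_type SpecialFormT then
        (* Step 2: only a literal primitive may act as a special form, so code
           like "(if x then let_in_ else case_) 42" is rejected here. *)
        (match head' with
         | Prim name -> (Hashtbl.find special_forms name) env args
         | _ -> failwith "special form used as an ordinary value")
      else if conv env head_type MacroT then
        (* Step 3: a macro call; expand it, then elaborate the result. *)
        infer env (expand_macro env head' args)
      else
        (* Step 4: an ordinary call; split the function type and check each
           argument against the expected argument type. *)
        List.fold_left
          (fun (f, f_type) arg ->
             let arg_type, ret_type = split_arrow env f_type in
             (App (f, check env arg arg_type), ret_type))
          (head', head_type) args

Note how nothing in this dispatch mentions let_in_ or any other identifier by name: the special meaning comes entirely from the initial bindings of such identifiers to special-form primitives.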
+\subsection{Macro expansion}
+
+As explained above, a macro call is recognized simply by the fact that the
+head of the \id{node} has type \id{Macro}.  As before with
+\id{Special-Form}, the mere fact that the head has type \id{Macro} does not
+guarantee that this is a valid macro.  Again, we may just be looking at
+source code of the form:
+\begin{verbatim}
+  (if x then mymacro else yourmacro) 42
+\end{verbatim}
+where the head may be a valid expression of type \id{Macro} but is not
+really a macro, because $x$ will only be known at runtime.  So, to make
+sure we do have a macro, we additionally need to verify that the head
+expression is \emph{closed}: it can refer to let-bound variables, as long
+as these are themselves \emph{closed}, but it cannot refer to a function's
+formal argument.  Once we have established that it is closed, we can simply
+evaluate it to a value, along with all the let-bound variables to which it
+refers.  Since Typer is pure, these evaluations have no visible side
+effects and their results can be cached.
+
+This \emph{closedness} criterion, along with the opportunistic evaluation
+of the required definitions, lets us avoid the complexity of a notion such
+as the binding phase levels used in Scheme~\cite{Flatt02}.  It also
+naturally supports lexically scoped macros and even higher-order macros.
+
+The downside of this approach is that it needs to know the type of the
+head of a \id{node} to detect a macro call.  This makes it virtually
+impossible to expand all macros in a separate phase before we infer types.
+We cannot infer types before we have expanded the macros either, so we are
+forced to interleave macro expansion and type inference within one big
+elaboration phase.  While this is a significant constraint, performing
+macro expansion from inside the type inference phase has the advantage
+that macros get access to the complete type environment as well as to the
+expected return type of the code they should construct.
+
+For macros that provide syntax extensions, this is often of little
+benefit, but it is crucial for macros that act as \emph{proof tactics},
+where the type environment represents the set of valid hypotheses and the
+expected return type is the proposition one wants to prove.  While Typer
+is a programming language at heart, its core language is very similar to
+that of proof assistants such as Coq, so it can also be used to write and
+manipulate propositions and proofs.
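Returning to the closedness requirement on macro heads: to make it concrete, here is a rough OCaml-style sketch.  The simplified term type and the Letbound/Formal binding representation are assumptions for this illustration, not Typer's actual data structures.

    (* Sketch only: a possible shape for the closedness test described above. *)
    module StrMap = Map.Make (String)

    type lexp =
      | Imm of int                  (* immediate constant *)
      | Var of string               (* variable reference *)
      | App of lexp * lexp          (* application (other constructs elided) *)

    type binding =
      | Letbound of lexp            (* let-bound: its definition is known *)
      | Formal                      (* a function's formal argument *)

    (* The head of a macro call must be closed: it may refer to let-bound
       variables whose definitions are themselves closed, but never to a
       function's formal argument.  (Recursive let definitions are ignored
       here to keep the sketch short.) *)
    let rec closed (env : binding StrMap.t) (e : lexp) : bool =
      match e with
      | Imm _ -> true
      | App (f, a) -> closed env f && closed env a
      | Var x ->
          (match StrMap.find_opt x env with
           | Some (Letbound def) -> closed env def
           | Some Formal | None -> false)

Once the head passes such a test, it can be evaluated together with the let-bound definitions it depends on, and since Typer is pure those results can be cached and reused.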
-Access to the typing environment and expected type of returned code
-(i.e. hypotheses and goal, when seen from the point of view of a proof
-assistant).
\section{Related work} \label{sec:related}
===================================== refs.bib =====================================
--- a/refs.bib
+++ b/refs.bib
@@ -173,6 +173,7 @@ toiti
 @string{Paterson= { Ross Paterson }}
 @string{Pfenning= { Frank Pfenning }}
 @string{Pientka = { Brigitte Pientka }}
+@string{Pierce  = { Benjamin C. Pierce }}
 @string{Piessens= { Frank Piessens }}
 @string{Plotkin = { Gordon Plotkin }}
 @string{Pym     = { David J. Pym }}
@@ -203,6 +204,7 @@ toiti
 @string{Thiemann= { Peter Thiemann }}
 @string{Tofte   = { Mads Tofte }}
 @string{Trifonov= { Valery Trifonov }}
+@string{Turner  = { David N. Turner }}
 @string{Urban   = { Christian Urban }}
 @string{Vanderwaart={ Joseph C. Vanderwaart }}
 @string{VanHorn = { Van Horn, David }}
@@ -5097,6 +5099,31 @@ toiti
   examples which illustrate these features.}
 }
+@Article{Pierce00,
+  author =   Pierce #{and}# Turner,
+  title =    {Local type inference},
+  journal =  TOPLAS,
+  year =     2000,
+  volume =   22,
+  number =   1,
+  pages =    {1-44},
+  month =    {jan},
+  url =      {http://dl.acm.org/citation.cfm?id=345100},
+  abstract = {We study two partial type inference methods for a language
+              combining subtyping and impredicative polymorphism.
+              Both methods are local in the sense that missing
+              annotations are recovered using only information from
+              adjacent nodes in the syntax tree, without long-distance
+              constraints such as unification variables.  One method
+              infers type arguments in polymorphic applications
+              using a local constraint solver.  The other infers
+              annotations on bound variables in function abstractions by
+              propagating type constraints downward from enclosing
+              application nodes.  We motivate our design choices
+              by a statistical analysis of the uses of type inference
+              in a sizable body of existing ML code.}
+}
+
 @InProceedings{Pirinen98,
   author =   {Pekka P. Pirinen},
   title =    {Barrier Techniques for Incremental Tracing},
@@ -6149,7 +6176,7 @@ toiti
   pages =    {347-359}
 }
 @InProceedings{Wadler95,
-  author =   {D. N. Turner and } # Wadler,
+  author =   Turner #{and}# Wadler,
   title =    {Once Upon a Type},
   crossref = {FPCA95},
   pages =    {1-11}
View it on GitLab: https://gitlab.com/monnier/typer/commit/dd6ff23da94ffde77026d732ffa3c99459fb...