Stefan pushed to branch report/jfla-2019 at Stefan / Typer
Commits:
e8ea9304 by Stefan Monnier at 2019-01-10T22:12:42Z
Final version
- - - - -
3 changed files:
- + GNUmakefile
- paper.tex
- + refs.bib
Changes:
=====================================
GNUmakefile
=====================================
@@ -0,0 +1,13 @@
+GLOBALREFS=~/share/misc/all
+LOCALREFS=refs.bib
+PAPER=paper
+
+# Apparently other options are "bibextract/citetags/citefind" and "bibtool"
+${LOCALREFS}: ${PAPER}.aux ${GLOBALREFS}.bib GNUmakefile
+ @# The -r option breaks with names with slashes :-( !
+ @# bibexport -r ${GLOBALREFS} -o $@ ${PAPER}.aux
+ @sed 's|^\\bibdata{.*}|\\bibdata{${GLOBALREFS}}|' <${PAPER}.aux >${PAPER}.tmp.aux
+ bibexport -o $@ ${PAPER}.tmp.aux
+ @rm -f ${PAPER}.tmp.aux $@-save-[1-9]*
+
+
=====================================
paper.tex
=====================================
@@ -60,24 +60,27 @@
%% \date{\today}
\begin{abstract}
-We present the language Typer which is a programming language in the ML
-family. Its name is an homage to Scheme(r) with which it shares the
-design of a minimal core language combined with powerful metaprogramming
+We present the language Typer which is a programming language in the
+ML/Haskell family. Its name is an homage to Scheme(r), with which it shares
+the design of a minimal core language combined with powerful metaprogramming
facilities, pushing as much functionality as possible into libraries.
-Contrary to Scheme, its syntax includes traditional infix notation, and
-its core language is very much statically typed. More specifically the
-core language is a variant of the implicit calculus of constructions
-(ICC). We present the main elements of the language, including its
-Lisp-style syntactic structure, its elaboration phase which combines
-macro-expansion and Hindley-Milner type inference, its treatment of
-implicit arguments, and its novel approach to impredicativity.
+Contrary to Scheme, its syntax includes traditional infix notation, and its
+core language is very much statically typed. More specifically, the core
+language is a variant of the implicit calculus of constructions (ICC).
+We present the main elements of the language, including its Lisp-style
+syntactic structure, its elaboration phase which combines macro-expansion
+and Hindley-Milner type inference, and its treatment of implicit
+arguments. %% and its novel approach to impredicativity.
\end{abstract}
+%% \tableofcontents
+
\section{Introduction}
Typer is an experimental programming language born from a desire to have
-a programming language with a type system as powerful as that of Coq but
-with a meta-programming system like those from the Lisp family.
+a programming language that is as convenient as OCaml for everyday tasks,
+but with a type system as powerful as that of Coq and with
+a meta-programming system like those from the Lisp family.
While Scheme with its dynamic typing may appear diametrically opposed to
systems like Coq where the static typing can be extremely rigid, they both
@@ -86,32 +89,31 @@ shares some of Scheme's design, with a minimal but very powerful language at
its core (Gallina) and layers of meta-programming added on top to make it
convenient for the programmer.
%%
-Yet, Coq's meta-programming does not follow the same minimalist approach,
-resulting in extra complexity: the meta-programs are split into tactics
-written in Ltac and proof scripts that call those different tactics.
-As a first approximation, those correspond respectively to Lisp macros and
-macro calls, except they don't reuse the same language and syntax
+Yet, Coq's meta-programming does not follow the same minimalist approach:
+the meta-programs are split into tactics, written in Ltac or OCaml,
+and proof scripts, also written in Ltac, that call those
+tactics. As a first approximation, those correspond respectively to Lisp
+macros and macro calls, except they don't reuse the same language and syntax
as Gallina.
Typer starts with a core language similar to that of Coq, but combines it
with a macro facility where macros are written directly in and called
directly from Typer, so the language can be seamlessly extended via
-meta-programming, just as in Lisp. Part of this is made possible by the use
-of a very primitive parsing technique, which is just flexible enough to
-support a fairly familiar infix syntax, yet simple enough that it maps
-straightforwardly to the equivalent of Lisp's S-expressions.
+meta-programming as in Lisp. The syntactic extensibility is made
+possible by the use of a very primitive parsing technique, which is just
+flexible enough to support a fairly familiar infix syntax, yet simple enough
+that it maps straightforwardly to the equivalent of Lisp's S-expressions.
While the core language lets us manipulate proofs, Typer is mostly meant to
be used as a programming language. So we wanted the power of fully
dependent types to not unduly get in the way of programs that do not
make use of them. Concretely, we tried to design Typer in such a way that
programs can be written with just as few extra annotations as in any other
-ML-family language.
+ML/Haskell-family language.
-Finally, while the core of Typer is also a variant of the calculus of
-inductive constructions~\cite{Paulin93}, it is different from Coq's pCIC in
-various important ways, most notably in that it eschews the Prop universe,
-replacing it with impredicative erasable arguments.
+%% Finally, while the core of Typer is also a variant of the calculus of
+%% inductive constructions~\cite{Paulin93}, it is different from Coq's pCIC in
+%% its treatment of impredicativity, which it only allows for erasable arguments.
The paper presents the following contributions of the design of Typer:
\begin{itemize}
@@ -121,8 +123,8 @@ The paper presents the following contributions of the design of Typer:
\item An elaboration phase which combines HM-style type inference and
macro-expansion, relying on the inferred type information to distinguish
macro calls.
-\item An extension of ICC*~\cite{Barras08} with inductive types
- and a new rule for impredicativity.
+\item An extension of ICC*~\cite{Barras08} with inductive types.
+ %% and a new rule for impredicativity.
\end{itemize}
%% \begin{itemize}
@@ -143,33 +145,36 @@ The paper presents the following contributions of the design of Typer:
\section{Typer primer}
\label{sec:primer}
-Before getting to Typer's internals, we'll give a short overview of what
-the language looks like.
-To a first approximation Typer is very similar to other languages in the ML
-family. It is a statically typed (pure) functional language, with basically
-two core elements: functions and datatypes. To define a function which adds
-1 to its argument, you can write:
+Before getting to Typer's internals, we'll give a short overview of what the
+language looks like. To a first approximation Typer is very similar to
+other languages in the ML/Haskell family. It is a statically typed (pure)
+functional language, with basically two core elements: functions and
+datatypes. To define a function which adds 1 to its argument, you can
+write:
%%
\begin{verbatim}
add1 : Int -> Int;
add1 = lambda x -> x + 1;
\end{verbatim}
%%
-Like in Agda~\cite{Bove09}, the type of dependently typed functions is
-written ``\texttt{(x :~$\tau_1$) -> $\tau_2$}''. The definition above
-could have used a bit of a syntactic sugar to become:
+The definition above could have used a bit of syntactic sugar to become:
\begin{verbatim}
add1 x = x + 1;
\end{verbatim}
-To define a new datatype to represent singly linked lists you can write:
+%% Like in Agda~\cite{Bove09}, the type of dependently typed functions is
+%% written ``\texttt{(x :~$\tau_1$) -> $\tau_2$}''.
+To define a new datatype to represent singly linked lists we can write:
\begin{verbatim}
type List (a : Type)
| nil
| cons (hd : a) (tl : List a);
\end{verbatim}
%%
-where \id{hd} and \id{tl} are optional field names: we could have written just
-\verb+cons a (List a)+ instead.
+where \id{hd} and \id{tl} are optional field names: we could have written
+just \verb+cons a (List a)+ instead. This \id{type} syntax is actually
+syntactic sugar provided by a predefined macro which rewrites the above to
+something of the form \id{List = typecons ...} where \id{typecons} is the
+low-level construct to define inductive types.
%%
Functions and data constructors are curried. You can define the
\id{map} function as follows:
@@ -197,7 +202,7 @@ This allows you to introduce new locally scoped definitions. The shape
of this construct is
``$\texttt{let}~\id{decls}~\texttt{in}~\id{exp}$'' where \id{decls} is
a sequence of declarations such as the ones shown above, separated by
-semicolons. For example, we could have defined the above map function
+semicolons. For example, we could have defined the above map function
as follows:
\begin{Verbatim}[samepage=true]
map f =
@@ -221,44 +226,60 @@ separated by semicolons.
%% a purely functional language.
If these are all the constructs, you might wonder how macros are defined.
-They're defined simply as values with a dedicated type \id{Macro}:
-\begin{verbatim}
- if_then_else_ : Macro;
- if_then_else_ = macro ifthen;
-\end{verbatim}
-where \id{macro} is the constructor of the \id{Macro} type and \id{ifthen}
-is the function which performs the expansion. Ignoring the types, this is
-very similar to how it is done in Emacs Lisp.
-%% FIXME: Maybe discuss the fact that we receive a *List* of Sexp rather
-%% than a single Sexp as argument?
-The \id{ifthen} function
-could be defined as follows:
+They're defined simply as values with a dedicated type \id{Macro}, which can
+be constructed with the \id{macro} constructor. For example, the following
+code defines a new macro \id{declare\_is\_within\_}:
\begin{verbatim}
- ifthen : List Sexp -> ME Sexp;
- ifthen args =
+ diw : List Sexp -> ME Sexp;
+ diw args =
let e1 = nth 0 args error_sexp;
e2 = nth 1 args error_sexp;
- e3 = nth 2 args error_sexp;
- code = (quote
- (case (uquote e1)
- | true => (uquote e2)
- | false => (uquote e3)))
- in return code;
+ e3 = nth 2 args error_sexp
+ in return (quote
+ ((lambda (uquote e1) -> (uquote e3))
+ (uquote e2)));
+
+ declare_is_within_ : Macro;
+ declare_is_within_ = macro diw;
\end{verbatim}
-where \id{quote} is a macro similar to the backquote/quasiquote in Lisp
-macros (and \id{uquote} corresponds to the comma in those systems).
+\id{quote} is a macro similar to the backquote/quasiquote in Lisp
+macros (and \id{uquote} corresponds to the ``unquoting'' comma in those
+systems).
%%
-Wherever the above macro is in scope, the programmer can write:
+Wherever the above macro definition is in scope, the following:
%%
\begin{verbatim}
- ... if_then_else_ x "x is true" "x is false" ...
+ ... declare_is_within_ x 2 (x + 4) ...
+\end{verbatim}
+will be taken as a macro call which expands to:
+\begin{verbatim}
+ ... ((lambda x -> x + 4) 2) ...
\end{verbatim}
%%
+The intention is, of course, to let the programmer call the macro with the
+syntax ``\kw{declare} $e_1$ \kw{is} $e_2$ \kw{within} $e_3$'', but that
+requires a change to the grammar, which we show in the next section.
+
Being purely functional, Typer resorts to the usual monadic technique to get
access to a side effecting world, just as is done in Haskell. In the above
-code, \id{ME} is the macro-expansion monad, used for the same purpose as the
-one in Template Haskell~\cite{Sheard02}, and \id{return} is the unit of
-that monad.
+code, \id{ME} is the macro-expansion monad (like that of Template
+Haskell~\cite{Sheard02}) and \id{return} is the unit of that monad. This is
+used to allow side effects during macro expansion, such as emitting
+warnings, generating fresh variable names, and reading files.
+
+Some aspects of Typer's base language are purposefully minimalist, leaving
+it to macros to provide a more convenient surface language. For example,
+the above \id{declare\_is\_within\_} macro can be defined more concisely
+using the \kw{define-macro} macro:
+\begin{verbatim}
+ define-macro (declare_is_within_ e1 e2 e3)
+ return (quote
+ ((lambda (uquote e1) -> (uquote e3))
+ (uquote e2)));
+\end{verbatim}
+As a sad note, these macros are currently not hygienic and the programmer
+needs to rely on \id{gensym} to avoid variable capture. This will hopefully
+be addressed soon.
\section{Syntactic structure}
\label{sec:syntax}
@@ -279,22 +300,27 @@ that monad.
\end{figure}
}
-Once lexical analysis is performed, rather than performing the syntactic
-analysis in one step, Typer further subdivides the syntactic analysis phase
-into two steps. The first step does a rudimentary analysis that only
-extracts a generic tree structure, called S-expression. The shape of
-S-expressions could be described with the datatype shown in
+Typer's parser is divided into a \emph{reader}, which turns the source code
+into a generic tree structure called an S-expression, and an
+\emph{elaborator}, which handles scoping, macro expansion, and type
+inference. The \emph{reader} handles the lexical analysis, which requires
+all tokens to be separated by whitespace, except for a few single-character
+tokens such as \Char{(} and \Char{;}.
+
+After lexical analysis is performed, the reader parses the stream of tokens
+according to a simple grammar to extract an S-expression. The shape of
+S-expressions can be described with the datatype shown in
Figure~\ref{fig:Typer-Sexp}. Note that contrary to the Lisp S-expression
syntax, parentheses are only used for grouping purposes, so \texttt{(x)}
-will produce the same \id{Sexp} as just \texttt{x}. And we use
-\texttt{()} as the printed representation of the zero-length \id{symbol}
-(which we call \emph{espilon}).
+will produce the same \id{Sexp} as just \texttt{x}. And we use \texttt{()}
+as the printed representation of the zero-length \id{symbol} (which we call
+\emph{epsilon}).
\FigTyperSexp
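+
+For readers more used to ML-style definitions, the datatype can be
+approximated in OCaml as follows (the constructor names here are
+illustrative, not necessarily those used in the figure):
+\begin{verbatim}
+  type sexp =
+    | Symbol  of string           (* includes the zero-length epsilon *)
+    | String  of string
+    | Integer of int
+    | Float   of float
+    | Node    of sexp * sexp list (* a head applied to sub-expressions *)
+\end{verbatim}
+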
Note how, at this stage, the representation of the code has no notion of
-bindings, functions, types, or function calls. It's only in a second step
-that S-expressions are analyzed to distinguish the various constructs such
+scoping, functions, types, or function calls. It's only during elaboration
+that S-expressions will be analyzed to distinguish the various constructs such
as macro calls, function calls, \kw{let} bindings, variable references, etc.
Any S-expression written using an infix or mixfix operator can also be
@@ -307,36 +333,47 @@ the user can write
\begin{verbatim}
let_in_ (_=_ x (_+_ (_*_ a b) c)) x
\end{verbatim}
-and these two notations result in identical S-expressions.
+and these two notations result in identical S-expressions. Note that
+contrary to ``\kw{let}'', ``='', and ``\kw{in}'', which are keywords,
+\id{let\_in\_} is a normal identifier, with no special meaning in the
+\emph{reader}'s grammar, so the following:
+\begin{verbatim}
+ let_in_ g x = f x + a
+\end{verbatim}
+is read in the same way as:
+\begin{verbatim}
+ _=_ (let_in_ g x) (_+_ (f x) a)
+\end{verbatim}
\subsection{Operator precedence grammar}
Typer's external notion of S-expression is more flexible than Lisp's, since
-it allows infix notation. It relies on operator precedence grammars
-(OPG)~\cite{Floyd63} for that. An OPG is a very restrictive subset of
-context free grammars, much more restrictive than LALR, for example.
+it allows infix notation. The \emph{reader} relies on operator precedence
+grammars (OPG)~\cite{Floyd63} for that. An OPG is a very restrictive subset
+of context free grammars, much more restrictive than LALR, for example.
You can think of the job of an OPG parser from the point of view of someone
trying to add parentheses to render the document's structure explicit:
whenever the parser sees something of the form ``$\id{kw}_1~e~\id{kw}_2$''
-(where $\id{kw}_1$ and $\id{kw}_2$ are keywords and $e$ is a sequence of
+(where $\id{kw}_1$ and $\id{kw}_2$ are keywords and $e$ is a possibly empty
+sequence of
non-keyword tokens or fully parenthesized sub-trees), it just needs to
decide whether that should be parenthesized as ``$\id{kw}_1~(e~\id{kw}_2$''
or ``$\id{kw}_1~e)~\id{kw}_2$''. For example, when starting with:
\begin{verbatim}
... g + f(5) * 6 - x ...
\end{verbatim}
-The parser will look at ``\texttt{+ f(5) *}'' and add an open paren because
+The parser will look at ``\texttt{+ f(5) *}'' and add an open parenthesis because
it decides that the ``\texttt{f(5)}'' should be attached to the
``\texttt{*}'' keyword:
\begin{verbatim}
... g + (f(5) * 6 - x ...
\end{verbatim}
-then it will see ``\texttt{* 6 -}'' and add a close paren this time:
+then it will see ``\texttt{* 6 -}'' and add a close parenthesis this time:
\begin{verbatim}
... g + (f(5) * 6) - x ...
\end{verbatim}
-Then it will consider ``\texttt{+ (f(5) * 6) -}'' and add an open paren:
+Then it will consider ``\texttt{+ (f(5) * 6) -}'' and add an open parenthesis:
\begin{verbatim}
... g + ((f(5) * 6) - x ...
\end{verbatim}
@@ -344,38 +381,50 @@ and so on and so forth. What sets OPG apart here is that it makes these
choices without considering $e$ or the surrounding context: it
bases its decision only on the pair of keywords.
-In Typer, the grammar is represented by simply associating to each keyword
-two precedence levels: one for its left side and another for its right side.
-Then parsing uses the following rule: when we see
-``$\id{kw}_1~e~\id{kw}_2$'', we lookup the right precedence of $\id{kw}_1$
-and the left precedence of $\id{kw}_2$, and we then attach $e$ to
-whichever is higher. If the precedences are equal, then we consider those
-two keywords as part of a mixfix.
-
-For example, given the default grammar, we can define the new form
-``$\kw{if}~e_1~\kw{then}~e_2~\kw{else}~e_3$'' by setting the precedences as
-follows;
+In Typer, the grammar is represented by a table which lists every token that
+should be considered as a keyword, along with its two precedence levels: one
+for its left side and another for its right side. Then parsing uses the
+following rule: when we see ``$\id{kw}_1~e~\id{kw}_2$'', we look up the right
+precedence of $\id{kw}_1$ and the left precedence of $\id{kw}_2$, and we
+then attach $e$ to whichever is higher. If the precedences are equal, then
+we consider those keywords as part of a mixfix whose name is constructed by
+concatenating them as $\id{kw}_1\_\id{kw}_2$.
+
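+To make the rule concrete, here is an illustrative OCaml sketch (the
+names and the representation of the table are ours, not those of the
+actual implementation):
+\begin{verbatim}
+  (* Decide where e attaches in "kw1 e kw2".  [table] maps each
+     keyword to its pair of precedence levels (left, right). *)
+  type decision =
+    | AttachLeft          (* "kw1 e) kw2": e binds to kw1 *)
+    | AttachRight         (* "kw1 (e kw2": e binds to kw2 *)
+    | Mixfix of string    (* equal precedences: kw1 and kw2 combine *)
+
+  let decide table kw1 kw2 =
+    let (_, right1) = List.assoc kw1 table in
+    let (left2, _) = List.assoc kw2 table in
+    if right1 > left2 then AttachLeft
+    else if right1 < left2 then AttachRight
+    else Mixfix (kw1 ^ "_" ^ kw2)
+
+  (* With table = [("+", (5, 5)); ("*", (6, 6)); ("-", (5, 5))],
+     decide table "+" "*" gives AttachRight (an open parenthesis)
+     and decide table "*" "-" gives AttachLeft (a close one),
+     matching the example above. *)
+\end{verbatim}
+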
+For example, given the default grammar, we can define the new syntax
+``$\kw{declare}~e_1~\kw{is}~e_2~\kw{within}~e_3$'' by setting the
+precedences as follows:
\begin{verbatim}
- define-operator "if" () 2;
- define-operator "then" 2 1;
- define-operator "else" 1 66;
+ define-operator "declare" () 2;
+ define-operator "is" 2 1;
+ define-operator "within" 1 66;
\end{verbatim}
-After which such a form gets parsed identically to
-``$\id{if\_then\_else\_}~e_1~e_2~e_3$''. Note that the modification of the
-grammar is independent from the definition of \id{if\_then\_else\_} as
-a macro: the grammar can be changed for infix functions and new macros
-can be defined without changing the grammar.
-
-While it enjoys a simple and efficient implementation\footnote{as well as
- some other interesting properties such as the ability to parse backward.}
+After that, such a form gets parsed identically to
+``$\id{declare\_is\_within\_}~e_1~e_2~e_3$''. Note that the modification of
+the grammar is independent of the definition of \id{declare\_is\_within\_}
+as a macro: the grammar changes only affect the parsing of source code into
+an S-expression. The resulting S-expression can then correspond to a macro
+call (as here), but it can just as well correspond to a function call (as is
+typically the case for infix mathematical operators like ``$+$''), or it may
+even result in an invalid chunk of code (for example if the above operator
+definitions are used without any \id{declare\_is\_within\_} in scope).
+
+Clearly, hardcoding levels of precedence as in the example above is messy,
+but this is only the low-level interface provided by the \emph{reader}.
+It can be supplemented with macros that provide a more flexible interface,
+for example letting the programmer specify precedence levels relative to
+existing ones, or using fragments of BNF grammar.
+
+While it enjoys a simple and efficient implementation,%% \footnote{as well as
+ %% some other interesting properties such as the ability to parse backward.}
the motivation behind the choice of an OPG parser was not efficiency but
rather the following aspects:
\begin{itemize}
\item Extensible grammars suffer from an inherent lack of modularity, since
the combination of two extensions can always lead to conflicts or
- ambiguities, sometimes in ways that are very difficult to understand (as
- anyone who had to fix a reduce/reduce conflict in an LALR parser can
- attest). While OPG's simplicity means that conflicts are more frequent,
+ ambiguities, sometimes in ways that can be difficult to understand.
+ %% (as anyone who had to fix a reduce/reduce conflict in an LALR parser can
+ %% attest)
+ While OPG's simplicity means that conflicts are more frequent,
they also tend to be much more superficial and hence easier to understand.
\item More importantly, OPG grammars are ``strongly context free'': a given
stream of tokens will be parsed in the same way regardless of the
@@ -417,33 +466,35 @@ is that macro's definition.
\begin{figure}
\begin{displaymath}
\MAlign{
+ \id{Ltype}=\id{Lexp} \\
\kw{type}~\id{Lexp} \\
\hspace{5pt}\begin{array}{@{|~}l@{~}l}
- \id{var} & \id{Id} \\
- \id{app} & (f : \id{Lexp})~(\id{arg} : \id{Lexp}) \\
- \id{fun} & (\id{arg} : \id{Id})~(\id{atype} : \id{Lexp})
+ \id{var} & (\id{name} : \id{Id})~(\id{pos} : \id{Int}) \\
+ \id{imm} & (\id{value} : \id{Limmediate}) \\
+ \id{prim}& (\id{id} : \id{String}) \\
+ \id{fix} & (\id{bindings} : \id{List}~\id{Lbinding})
~(\id{body} : \id{Lexp}) \\
- \id{arw} & (\id{arg} : \id{Id})~(\id{atype} : \id{Lexp})
- ~(\id{rtype} : \id{Lexp}) \\
- \id{let} & (\id{var} : \id{Id})~(\id{val} : \id{Lexp})
+ \id{arw} & (\id{arg} : \id{Id})~(\id{atype} : \id{Ltype})
+ ~(\id{rtype} : \id{Ltype}) \\
+ \id{fun} & (\id{arg} : \id{Id})~(\id{atype} : \id{Ltype})
~(\id{body} : \id{Lexp}) \\
+ \id{app} & (f : \id{Lexp})~(\id{arg} : \id{Lexp}) \\
+ \id{ind} & (\id{params} : \id{List}~\id{Id})
+ ~(\id{cases} : \id{List}~\id{LindCase}) \\
+ \id{con} & (\id{typ} : \id{Ltype})~(\id{name} : \id{Id}) \\
\id{case}& (\id{val} : \id{Lexp})
~(\id{cases} : \id{List}~\id{Lbranch}) \\
- \id{con} & (\id{adt} : \id{Lexp})~(\id{name} : \id{Id}) \\
- \id{adt} & (\id{params} : \id{List}~\id{Id})
- ~(\id{cases} : \id{List}~\id{LadtCase}) \\
- \id{prim}& (\id{id} : \id{String}); \\
- \end{array} \medskip \\
- \kw{type}~\id{LadtCase} \\
- \hspace{5pt}\begin{array}{@{|~}l@{~}l}
- \id{adtcase} & (\id{name} : \id{Id})~(\id{fields} :
- \id{List}~\id{Lexp});
- \end{array} \medskip \\
- \kw{type}~\id{Lbranch} \\
- \hspace{5pt}\begin{array}{@{|~}l@{~}l}
- \id{branch} & (\id{pattern} : \id{List}~\id{Id})
- ~(\id{body} : \id{Lexp});
- \end{array}
+ \end{array} %% \medskip \\
+ %% \kw{type}~\id{LindCase} \\
+ %% \hspace{5pt}\begin{array}{@{|~}l@{~}l}
+ %% \id{indcase} & (\id{name} : \id{Id})~(\id{fields} :
+ %% \id{List}~\id{Ltype});
+ %% \end{array} \medskip \\
+ %% \kw{type}~\id{Lbranch} \\
+ %% \hspace{5pt}\begin{array}{@{|~}l@{~}l}
+ %% \id{branch} & (\id{pattern} : \id{List}~\id{Id})
+ %% ~(\id{body} : \id{Lexp});
+ %% \end{array}
}
\end{displaymath}
\caption{Sketch of the definition of core $\lambda$-expressions}
@@ -458,16 +509,32 @@ is that macro's definition.
Elaboration is the phase in Typer's compiler which turns an S-expression
into an expression in Typer's core $\lambda$-calculus. We want most of this phase
to be itself implemented in Typer so that we can prove properties such as
-the correctness of the compilation of pattern matching~\cite{Cockx18}.
-
-Figure~\ref{fig:Lexp} shows the (simplified) representation used internally
-for that calculus. Notice that \id{Lexp} represents both what is usually
+the correctness of the compilation of pattern
+matching~\cite{Cockx18,Christiansen16}.
+
+Figure~\ref{fig:Lexp} shows a sketch of the definition of the datatype
+\id{Lexp} to represent core expressions and \id{Ltype} to represent their
+types. Notice that there is really only one datatype used for both: the
+name \id{Ltype} is only used as a hint that this expression is expected to
+be a type, since \id{Lexp} is used both to represent what is usually
considered \emph{expressions} (such as \id{let}, \id{fun}, ...) as well as
what is usually considered as \emph{types} (e.g.~$\id{arw}~x~t_1~t_2$ which
-represents the function type \texttt{(x~:~t$_1$) -> t$_2$}, or \id{adt}
-which is the representation of an abstract data type) since this core
+represents the function type \texttt{(x~:~t$_1$) -> t$_2$}, or \id{ind}
+which is the representation of an inductive data type) since this core
language is a kind of Pure Type System (PTS)~\cite{Barendregt91b}.
+The different constructors of \id{Lexp} are as follows: \id{var} is
+a variable reference represented as a De Bruijn index together with the
+original name of the variable, used for error messages and debugging
+purposes; \id{imm} represents an immediate value such as a constant string
+or integer; \id{prim} represents built-in primitives such as the string
+concatenation and the string type; \id{fix} represents a set of local
+bindings which can be mutually recursive; \id{arw}, \id{fun}, and \id{app}
+represent, respectively, the type of a function, a function, and a function
+call; finally, \id{ind} represents an inductive data type, \id{con} represents
+a particular constructor of an inductive data type, and \id{case} represents
+the eliminator by case analysis.
+
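+As a reading aid, the sketch in Figure~\ref{fig:Lexp} can be transcribed
+into OCaml roughly as follows (constructor names are capitalized as OCaml
+requires, and the auxiliary types are our own guesses):
+\begin{verbatim}
+  type id = string
+  type limmediate = Int of int | Str of string   (* placeholder *)
+
+  type lexp =
+    | Var  of id * int                  (* name + De Bruijn index *)
+    | Imm  of limmediate                (* immediate constant *)
+    | Prim of string                    (* built-in primitive *)
+    | Fix  of lbinding list * lexp      (* mutually recursive bindings *)
+    | Arw  of id * ltype * ltype        (* the type (x : t1) -> t2 *)
+    | Fun  of id * ltype * lexp         (* function *)
+    | App  of lexp * lexp               (* function call *)
+    | Ind  of id list * lind_case list  (* inductive data type *)
+    | Con  of ltype * id                (* constructor of an inductive *)
+    | Case of lexp * lbranch list       (* case analysis *)
+  and ltype = lexp                      (* "Ltype" is only a hint *)
+  and lbinding  = id * ltype * lexp     (* assumed shape *)
+  and lind_case = id * ltype list       (* assumed shape *)
+  and lbranch   = id list * lexp        (* assumed shape *)
+\end{verbatim}
+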
Elaboration performs the following tasks:
\begin{itemize}
\item Finish the syntactic analysis: decide what is a function call,
@@ -514,12 +581,12 @@ runtime system.
}
Elaboration has to distinguish the different constructs of the language.
-Figure~\ref{fig:elaborate} shows a pseudocode of how it works (where
-\id{Ltype} is actually an alias for \id{Lexp}). In the case of Typer, just
-as is the case of Scheme, we do it without hard coding the meaning of any
-identifier. More specifically, elaboration will not check to see if an
-\id{Sexp} \id{node} has a symbol named for example \id{let\_in\_} as its
-head. Instead, when it encounters a \id{node}, it does the following:
+Figure~\ref{fig:elaborate} shows pseudocode of how it works. In the case
+of Typer, just as in Scheme, we do it without hard coding the meaning of
+any identifier. More specifically, elaboration will not check to see if an
+\id{Sexp} \id{node} has a symbol named for example \id{let\_in\_} as its
+head. Instead, when it encounters a \id{node}, it does the following:
\begin{enumerate}
\item Elaborate the head, which will also return the inferred type of
that expression.
@@ -562,14 +629,15 @@ a source code of the form:
where the head may be a valid expression of type \id{Macro} but is not
really a macro we can expand because $x$ will only be known at runtime. So,
to make sure we do have a macro, we additionally need to verify that the
-head expression is \emph{closed}: it can refer to let-bound variables, as
-long as these are themselves \emph{closed}, but it cannot refer to
-a function's formal argument. Once established that it is closed, we can
-reduce it (by regular evaluation) to a value out of which we can finally
-extract the Typer function to call to perform the expansion. Since Typer is
-pure, the evaluation of the head has no visible side-effects and its result
-can be cached (contrary to the macroexpansion itself, which is done in
-a monad that can perform arbitrary side-effects).
+head expression is \emph{let-closed}. We say that a term is
+\emph{let-closed} if all its free variables are bound by \id{fix} and
+their definitions are themselves \emph{let-closed}. Once it is established
+that the head is \emph{let-closed}, we can reduce it (by regular evaluation)
+to a value out of which we can finally extract the Typer function to call to
+perform the expansion. Since Typer is pure, the evaluation of the head has
+no visible side-effects and its result can be cached (contrary to the
+macroexpansion itself, which is done in the \id{ME} monad and is thus
+allowed to perform arbitrary side-effects).
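+
+A minimal sketch of this test in OCaml, reusing the \id{Lexp}
+transcription above (memoization and termination in the presence of
+recursive bindings are elided):
+\begin{verbatim}
+  type binding =
+    | FixBound of lexp  (* bound by fix, outside the term *)
+    | FormalArg         (* a function's formal argument *)
+    | Local             (* bound inside the term itself *)
+
+  (* Are all free variables fix-bound, with let-closed definitions? *)
+  let rec let_closed env e =
+    match e with
+    | Var (x, _) ->
+        (match List.assoc_opt x env with
+         | Some Local -> true
+         | Some (FixBound def) -> let_closed env def
+         | _ -> false)            (* formal argument or unbound *)
+    | Imm _ | Prim _ -> true
+    | Con (t, _) -> let_closed env t
+    | App (f, a) -> let_closed env f && let_closed env a
+    | Arw (x, t1, t2) ->
+        let_closed env t1 && let_closed ((x, Local) :: env) t2
+    | Fun (x, t, body) ->
+        let_closed env t && let_closed ((x, Local) :: env) body
+    | Fix (bs, body) ->
+        let env' = List.map (fun (x, _, _) -> (x, Local)) bs @ env in
+        List.for_all (fun (_, t, v) ->
+            let_closed env' t && let_closed env' v) bs
+        && let_closed env' body
+    | Ind _ | Case _ -> false     (* conservatively rejected here *)
+\end{verbatim}
+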
%% \TODO{Drop this}
%% This \emph{closedness} criterion along with the opportunistic evaluation of
@@ -583,31 +651,41 @@ And we can't infer types before we have expanded the macros either, so we
are forced to interleave macro expansion and type inference within one big
elaboration phase. While this can be a significant downside, performing macro
expansion from inside the type inference phase has the advantage that macros
-can get access to the complete type context.
+can get access to the complete type context and the expected return type.
-In the context of macros that provide syntax extensions, this is often of
-little benefit, but it makes it possible to write other kinds of macros
-which act more like \emph{proof tactics}, where the type environment
-represents the set of valid hypotheses, and the expected return type is the
-proposition one wants to prove. While Typer is a programming language at
-heart, its core language is very similar to that of proof assistants, so it
-can also be used to write and manipulate propositions and proofs.
+In the context of macros that provide syntax extensions, this is currently
+of little benefit, but we intend to make it possible in the future to write
+other kinds of macros which act more like \emph{proof tactics}, where the
+type environment represents the set of valid hypotheses, and the expected
+return type is the proposition one wants to prove. While Typer is
+a programming language at heart, its core language is very similar to that
+of proof assistants, so it can also be used to write and manipulate
+propositions and proofs.
\subsection{Type checking}
-Typer uses a bidirectional type checking~\cite{Pierce00} approach to
-minimize the required type annotations. Usually, this means that
-elaboration is split into two mutually recursive functions:
+Compared to the previously traditional way of checking types, bidirectional
+type checking~\cite{Pierce00} is a more principled method that propagates
+type information more effectively while keeping the type checking code just
+as simple, if not simpler. Type checking of the
+various language constructs is split into two mutually recursive functions:
+%%
+%% Typer uses a bidirectional type checking approach to
+%% minimize the required type annotations: bidirectional type checking lets
+%% existing type information propagate to many .
+%%
\begin{itemize}
\item \id{infer} takes a type environment and an \id{Sexp} and returns the
- corresponding elaborated \id{Lexp} along with its type (also an \id{Lexp}).
+ corresponding elaborated \id{Lexp} along with its \id{Ltype}. If it is
+ called on a construct it does not handle, type checking fails for lack of
+ a type annotation.
\item \id{check} takes a type environment, an \id{Sexp}, and its expected
- type (an \id{Lexp}), and returns the elaborated form, of type \id{Lexp}.
+ type (an \id{Ltype}), and returns the elaborated form, of type \id{Lexp}.
+ If it is called on a construct it does not handle, it delegates to
+ \id{infer} and then checks equality with the inferred type (which is the
+ only place where types are compared).
\end{itemize}
-And each language construct is either handled in \id{infer} or in \id{check}
-(so \id{check} defers to \id{infer} when faced with a construct it does not
-know how to handle). For example, typically function calls are handled in
-\id{infer} as follows:
+For example, function calls are typically handled in \id{infer} as follows:
\begin{enumerate}
\item When a function call is encountered, \id{infer} is called on the
function part.
@@ -618,6 +696,14 @@ know how to handle). For example, typically function calls are handled in
\item Finally we can construct the \id{app} node and return it along with
its type.
\end{enumerate}
+The previously traditional way to check types relied almost exclusively on
+\id{infer}, so the advantage of bidirectional type checking is in \id{check},
+which both lets us confine the place where type equality needs to be
+checked, and lets us propagate type information available from the context.
+For example, functions are typically handled by \id{check}, which makes it
+unnecessary to annotate the formal argument with its type when the context
+already gives this information.
+
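+The shape of these two functions can be conveyed on a toy simply typed
+calculus (this sketch is ours and only meant to show the structure;
+Typer's actual functions operate on \id{Sexp} and \id{Lexp}):
+\begin{verbatim}
+  type ty = TInt | TArrow of ty * ty
+  type term =
+    | Var of string
+    | Lit of int
+    | Lam of string * term      (* unannotated: handled by check *)
+    | App of term * term
+    | Ann of term * ty          (* annotation: makes a term inferable *)
+
+  exception Type_error of string
+
+  let rec infer env = function
+    | Var x -> (try List.assoc x env
+                with Not_found -> raise (Type_error ("unbound " ^ x)))
+    | Lit _ -> TInt
+    | App (f, a) ->
+        (match infer env f with
+         | TArrow (t1, t2) -> check env a t1; t2
+         | _ -> raise (Type_error "not a function"))
+    | Ann (e, t) -> check env e t; t
+    | Lam _ -> raise (Type_error "lambda needs an annotation")
+
+  and check env e expected =
+    match e, expected with
+    | Lam (x, body), TArrow (t1, t2) ->
+        check ((x, t1) :: env) body t2  (* context supplies x's type *)
+    | _ ->
+        (* delegate to infer: the only place types are compared *)
+        let t = infer env e in
+        if t <> expected then raise (Type_error "type mismatch")
+\end{verbatim}
+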
In our case, this division of labor would result in too much code
duplication since both \id{infer} and \id{check} would need to look for
special forms and macro calls, so we want to have just a single
@@ -630,49 +716,69 @@ to what \id{infer} would need, and changed it to also cover the needs of
: \id{Ctx} \to \id{Sexp} \to \id{Maybe}~\id{Ltype}
\to \id{Pair}~\id{Lexp}~\id{Ltype};
\end{displaymath}
-We can then trivially define \id{check} and \id{infer} on top of it:
-\begin{displaymath}
- \MAlign{
- \id{infer}~c~s = \id{elaborate}~c~s~\id{nothing}; \\
- \id{check}~c~s~t = \id{fst}~(\id{elaborate}~c~s~(\id{just}~t));
- }
-\end{displaymath}
+So \id{elaborate} can receive the expected type from the context when it is
+available. This information can be used not only to reduce the need for type
+annotations, but also to adapt the elaborated code to what the context
+expects, as we will see below.
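+
+On the toy sketch above, this merge would amount to something like:
+\begin{verbatim}
+  (* Some t plays the role of check; None plays the role of infer. *)
+  let elaborate env e (expected : ty option) : ty =
+    match expected with
+    | None -> infer env e
+    | Some t -> check env e t; t
+\end{verbatim}
+except that in Typer \id{elaborate} is the primitive operation rather than
+a wrapper, so that special forms and macro calls are handled in one place.
+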
+%% Properly propagating type information is particularly important
+%% in languages with powerful type systems where checking equality between
+%% types can be non-trivial.
+%% We can then trivially define \id{check} and \id{infer} on top of it:
+%% \begin{displaymath}
+%% \MAlign{
+%% \id{infer}~c~s = \id{elaborate}~c~s~\id{nothing}; \\
+%% \id{check}~c~s~t = \id{fst}~(\id{elaborate}~c~s~(\id{just}~t));
+%% }
+%% \end{displaymath}
\subsection{Type inference}
Bidirectional type checking is helpful to propagate existing type
information, such as that given in top-level type annotations, but it is no
substitute for real type inference. In order to provide the same kind of
-experience as in other members of the ML family, Typer also uses a form of
-Hindley-Milner (HM) unification-based type inference with let-polymorphism.
+experience as in other members of the ML/Haskell family, Typer refines the
+type checking presented above with a form of Hindley-Milner (HM)
+unification-based type inference with let-polymorphism.
-HM inference is defined in a much restricted language than Typer, so it
-required some adjustments. For lack of a nice inference algorithm with
+HM is designed for a much more restricted language than Typer, so it
+requires some adjustments. For lack of a nice inference algorithm with
principal types in ICC, Typer uses an ad-hoc algorithm which tries to be
simple enough to be understandable for the user and works about as well as
-HM on those programs that fall into the HM subset.
-
-HM performs specialization (i.e.~introduction of implicit (type)
-applications) every time an identifier is used, and every time an identifier
-is used it is fully specialized: all type arguments are instantiated.
-In the context of Typer this is sometimes too eager and at the same time it
-is insufficient: it can be too eager because an identifier can be passed to
-a function which expects an erasable (``polymorphic'') function argument in
-which case the identifier should be passed as-is without specializing it.
-And it can be insufficient because a normal function can return an erasable
-function so specialization can be needed also at other places than where
-identifiers are used. So, instead Typer performs specialization as a kind
-of coercion: whenever an expression with erasable function type is used in
-a context which does not expect an erasable function, that expression in
-specialized (i.e. Typer inserts a type application).
+HM on those programs that could be written in ML.
+
+HM is based on three elements: unification, automatic generalization, and
+automatic specialization. Typer's inference uses unification in the same
+way as HM: when \id{elaborate} is not provided with an expected type yet
+sees a construct for which the type cannot be inferred, rather than
+signaling an error it creates a new meta-variable which stands for the
+expected type; and in the reverse case, where \id{elaborate} both receives an
+expected type and can infer the type of the expression, it unifies
+the two types, instantiating meta-variables so that the
+types become equal.
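+
+The following illustrative OCaml sketch shows one way such meta-variables
+and unification can be realized (mutable references stand in for
+meta-variables; the occurs check and many other details are omitted):
+\begin{verbatim}
+  type ty =
+    | TConst of string
+    | TArrow of ty * ty
+    | TMeta  of meta ref
+  and meta = Unbound of int | Link of ty
+
+  let fresh =                        (* create a new meta-variable *)
+    let n = ref 0 in
+    fun () -> incr n; TMeta (ref (Unbound !n))
+
+  let rec head t =                   (* follow instantiation links *)
+    match t with
+    | TMeta { contents = Link t' } -> head t'
+    | _ -> t
+
+  let rec unify t1 t2 =
+    match head t1, head t2 with
+    | TConst a, TConst b when a = b -> ()
+    | TArrow (a1, r1), TArrow (a2, r2) -> unify a1 a2; unify r1 r2
+    | TMeta r, t | t, TMeta r ->
+        (match t with
+         | TMeta r' when r' == r -> ()  (* same meta-variable *)
+         | _ -> r := Link t)            (* instantiate it *)
+    | _ -> failwith "type mismatch"
+\end{verbatim}
+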
HM performs generalization (i.e.~introduction of implicit (type)
abstractions) whenever a value is defined in a let-binding. Typer does the
-same when the variable had a type annotation, but with the following
-difference: if a free meta-variable is used in a non-erasable way, we signal
+same when the variable has no type annotation, but with the following
+caveat: if a free meta-variable is used in a non-erasable way, we signal
an error since generalizing it with an erasable abstraction would lead to
invalid code.
+HM performs specialization (i.e.~introduction of implicit (type)
+applications) every time an identifier is used, by instantiating all its
+type arguments with fresh meta-variables. In the context of Typer this is
+sometimes too eager and at the same time it is insufficient: it can be too
+eager because an identifier can be passed to a function which expects an
+erasable (``polymorphic'') function argument, in which case the identifier
+should be passed as-is without specializing it. And it can be insufficient
+because a normal function can return an erasable function so specialization
+can be needed also at other places than where identifiers are used. So,
+instead, Typer performs specialization as a kind of coercion: whenever an
+expression with erasable function type is used in a context which does not
+expect an erasable function, that expression is specialized (i.e.~Typer
+inserts a type application with a fresh meta-variable). This is an example
+where we take advantage of bidirectional type checking's ability to
+propagate the type expected from the context.
+
For definitions that come with a type annotation, Typer also provides a form
of generalization: first, when elaborating a \emph{type annotation}, all
remaining free meta variables are generalized into erasable arrows, so the
@@ -681,7 +787,7 @@ user can write:
map : (?a -> ?b) -> List ?a -> List ?b;
\end{verbatim}
where we use the ``\texttt{?}'' prefix for user-written meta-variables, so
-Typer will add \texttt{?a} and \texttt{?b} as two additional erasable
+Typer will turn \texttt{?a} and \texttt{?b} into two additional erasable
arguments.
%% and this will be elaborated to a type equivalent to:
%% \begin{verbatim}
@@ -700,12 +806,21 @@ previous type annotation is followed by:
the elaboration will automatically add the two additional (erasable) $\lambda$
corresponding to \texttt{?a} and \texttt{?b}.
-We believe this behaves just like HM inference for the corresponding
-sublanguage, but have not shown it yet. Also we do not know whether this
-inference algorithm is guaranteed to terminate in theory, but it seems to
-perform well in practice. Given that macro-expansion is allowed to perform
-arbitrary side-effects, we have already given up the idea of guaranteeing
-termination of elaboration anyway.
+\begin{lemma}[HM equivalence]
+  For all expressions $e$ in context $\Gamma$ in the subset of Typer
+  corresponding to HM, Typer's inference elaborates $e$ to
+  $\Jtyper e \tau$ iff HM elaborates it to $\Jtyper e \tau$.
+\end{lemma}
+The proof is a straightforward induction on $e$: the only significant
+difference is the conditions under which specialization happens, but in the
+language handled by HM all contexts expect monotypes and the only
+polymorphically typed elements are variables, so the conditions used by
+Typer degenerate to those of HM.
+
+We do not know whether this inference algorithm is guaranteed to
+terminate in theory, but it seems to perform well in practice. Given that
+macro-expansion is allowed to perform arbitrary side-effects, we have
+already given up the idea of guaranteeing termination of elaboration anyway.
\section{Core language}
@@ -777,7 +892,8 @@ annotations as well as all erasable arguments:
\end{displaymath}
This expresses the fact that erasable arguments do not influence evaluation.
So far, this is exactly like ICC*. But Typer extends this with
-impredicativity and with inductive types.
+%% impredicativity and with
+inductive types.
\subsection{Inductive types}
@@ -890,7 +1006,10 @@ the branch $en$ receives an (erasable and by default invisible) witness that
according to the needed refinement. Compared to Coq's \kw{match} construct,
this also eliminates the need to resort to the \emph{convoy
pattern}~\cite{CPDT} when this equality proof is needed for other reasons
-than to refine the return type of the branch.
+than to refine the return type of the branch. Of course, the downside is
+the need to manipulate those equality proofs, but we hope to hide (or
+auto-generate) these manipulations in the majority of cases by
+meta-programming.
In order for the return type of \kw{case} to be able to depend on the value
of the object analyzed, each branch additionally receives another equality
@@ -905,129 +1024,119 @@ normal variables. When needed, the source code can also make them visible,
for example the pattern \texttt{nil (P := iszero)} would let \id{iszero}
refer to the (still erasable) proof that ``$\id{Eq}~n~zero$''.
-\subsection{Erasable impredicative arguments}
-
-The other important difference between Typer and ICC* is the treatment of
-impredicativity. ICC* follows the approach of making every universe
-predicative except for the bottom universe, called \kw{Prop} or \kw{Set},
-which is impredicative. Usually \kw{Prop} corresponds to an impredicative
-universe that can be erased during program extraction and \kw{Set} is its
-non-erasable counterpart.
-
-We hope that the erasability of \kw{Prop} would be somewhat redundant with
-ICC*'s own notion of erasability, so we did not want to distinguish \kw{Set}
-from \kw{Prop}. Also, while the typing rules of the calculus of
-constructions are not made more complex by \kw{Prop} and \kw{Set}'s
-impredicativity, the same is not true in the presence of inductive types
-where soundness requires disallowing strong elimination on large
-inductive types.
-
-So we decided to introduce impredicativity differently: Typer does not come
-with an impredicative universe like \kw{Set} or \kw{Prop}, and instead it lets
-its erasable functions be impredicative. To see what this means, let's
-consider the rules of the colored pure type system of ICC*:
-\begin{displaymath}
- \begin{array}{lcl}
- \mathcal{S} &=& \{~ \kw{Prop}; \kw{Type}~\ell ~|~ \ell\in\mathbb{N} ~\} \\
- \mathcal{A} &=&
- \{~ (\kw{Prop} : \kw{Type}~0);
- (\kw{Type}~\ell : \kw{Type}~(\ell+1)) ~|~ \ell\in\mathbb{N} ~\} \\
- \mathcal{R} &=&
- \MAlign{
- \{~ (k, \kw{Prop}, s, s);
- (k, s, \kw{Prop}, \kw{Prop}) ~|~ s \in \mathcal{S} ~\} \\
- \cup ~\{~
- (k, \kw{Type}~\ell_1, \kw{Type}~\ell_2, \kw{Type}~(\ell_1 \sqcup \ell_2))
- ~|~ \ell_1,\ell_2 \in\mathbb{N} ~\}
- }
- \end{array}
-\end{displaymath}
-The $(k, s, \kw{Prop}, \kw{Prop})$ is the relevant rule above that allows
-impredicativity for \kw{Prop}. One can divide this rule into two: $(\kw{e},
-s, \kw{Prop}, \kw{Prop})$ and $(\kw{n}, s, \kw{Prop}, \kw{Prop})$. if you
-consider Typer's $\Type 0$ as ICC's \kw{Prop}, the first rule is included in
-Typer but not the second. As it turns out, the second, is actually
-redundant in ICC*. More specifically:
-
-\begin{lemma}[Erasability of impredicative arguments]
- \label{lem:erasable}
- In ICC*, if $\Jtyper x {\tau_x : \kw{Type}~\ell}$ and $\Jtyper e {\tau_e :
- \kw{Prop}}$, then $x$ can only appear in $\Ferase e$ within arguments
- to functions of type $(y\:\tau_1) \TEarw \tau_2$ where $\tau_2 : \kw{Prop}$ and
- $\tau_1 : \kw{Type}~\_$.
-\end{lemma}
-\begin{proof}
- By induction on the derivation of $\Jtyper[\Gamma_e] e {\tau_e}$. Since $\tau_e :
- \kw{Prop}$, clearly $e$ can neither be a sort nor an arrow type and it
- cannot be $x$ itself either, so it can only be either a $\lambda y\:\tau_y \to e_y$ or
- an application $e_1~e_2$. We can apply the induction
- hypothesis to $e_y$ and to $e_1$. As for $e_2$, there are two cases:
- either $e_1$ takes an argument of type $\tau_1\:\kw{Type}~\ell'$ in which case
- we're done, or it takes an argument of type $\tau_1\:\kw{Prop}$ in which case
- we can again apply the induction hypothesis.
-\end{proof}
-Corollary: ICC*'s rule $(\kw{n}, s, \kw{Prop}, \kw{Prop})$ is redundant since
-we could convert all the impredicative functions that use it to functions
-that use $(\kw{e}, s, \kw{Prop}, \kw{Prop})$ instead.
-
-\subsection{Strong elimination of large inductive types}
-
-If we extend ICC* with Coq-style inductive types, Lemma~\ref{lem:erasable}
-does not hold any more because we can perform a \kw{case} analysis on an
-argument in universe $\Type\ell$ and return something in universe \kw{Prop}.
-For this reason, while the restriction of impredicativity to
-erasable functions does not make Typer weaker than ICC* it does make it in
-this respect weaker than CIC. But Typer is incomparable to CIC because in
-another respect it allows things that CIC does not.
-
-As mentioned before, CIC has a special restriction that large inductive
-types (i.e.~inductive types that belong to a universe that is smaller than
-some of the values it carries) cannot be used in a strong elimination
-(i.e.~a \kw{case} analysis that returns a type in a universe larger than
-that of the object analyzed).
-
-This restriction means for example that while we can define in Coq
-a large inductive type like:
-\begin{verbatim}
- Inductive Ω : Set :=
- | int : Ω
- | arw : Ω -> Ω -> Ω
- | all : forall k:Set, (k -> Ω) -> Ω.
-\end{verbatim}
-we cannot prove properties such as the following (which we needed
-while working on~\cite{Monnier07}):
-\begin{verbatim}
- forall K₁ K₂ F₁ F₂ P,
- all K₁ F₁ = all K₂ F₂ -> P K₁ F₁ -> P K₂ F₂.
-\end{verbatim}
-This important restriction significantly reduces the applicability of large
-inductive types, but is needed because it would be otherwise possible
-to ``smuggle'' a large element within an inductive object of a smaller
-universe and take it back out later, resulting in
-unsoundness~\cite{Coquand86b}.
-
-Since Typer's impredicativity is limited to erasable elements, those large
-elements cannot really be taken back out later anyway, by virtue of their
-erasability. For this reason, we \emph{conjecture} that our form of
-impredicativity does not require this restriction on strong elimination.
-As a consequence, in Typer we can define the above inductive type (with an
-erasable $k$) and prove its property (again with erasable $K_1$ and $K_2$).
-
-The weak justification behind it, is a philosophical one: erasable arguments
-are not \emph{significant}, so a function that takes an erasable argument
-could be considered as a mere ``schema'' or ``prototype'' which stands for
-all the specialized versions of the function. A similar argument is
-discussed by Fruchart and Longo in~\cite{Fruchart96}.
-
-%% \subsection{Impredicativity rules of Typer}
-
-%% Once we decided to try our luck with this conjecture, it was a small step to
-%% add yet more potentially risky features to our calculus. So we currently
-%% also allow impredicativity at all universe levels rather than only at the
-%% bottom. Clearly, this risks falling victim of paradoxes like
-%% Hurken's~\cite{Hurkens95}, so buyer beware: we have no yet made any serious
-%% attempt at proving or disproving the soundness of this extension.
-
+%% \subsection{Erasable impredicative arguments}
+
+%% The other important difference between Typer and ICC* is the treatment of
+%% impredicativity. ICC* follows the approach of making every universe
+%% predicative except for the bottom universe, called \kw{Prop} or \kw{Set},
+%% which is impredicative. Usually \kw{Prop} corresponds to an impredicative
+%% universe that can be erased during program extraction and \kw{Set} is its
+%% non-erasable counterpart.
+
+%% We hope that the erasability of \kw{Prop} would be somewhat redundant with
+%% ICC*'s own notion of erasability, so we did not want to distinguish \kw{Set}
+%% from \kw{Prop}. Also, while the typing rules of the calculus of
+%% constructions are not made more complex by \kw{Prop} and \kw{Set}'s
+%% impredicativity, the same is not true in the presence of inductive types
+%% where soundness requires disallowing strong elimination on large
+%% inductive types.
+
+%% So we decided to introduce impredicativity differently: Typer does not come
+%% with an impredicative universe like \kw{Set} or \kw{Prop}, and instead it lets
+%% its erasable functions be impredicative. To see what this means, let's
+%% consider the rules of the colored pure type system of ICC*:
+%% \begin{displaymath}
+%% \begin{array}{lcl}
+%% \mathcal{S} &=& \{~ \kw{Prop}; \kw{Type}~\ell ~|~ \ell\in\mathbb{N} ~\} \\
+%% \mathcal{A} &=&
+%% \{~ (\kw{Prop} : \kw{Type}~0);
+%% (\kw{Type}~\ell : \kw{Type}~(\ell+1)) ~|~ \ell\in\mathbb{N} ~\} \\
+%% \mathcal{R} &=&
+%% \MAlign{
+%% \{~ (k, \kw{Prop}, s, s);
+%% (k, s, \kw{Prop}, \kw{Prop}) ~|~ s \in \mathcal{S} ~\} \\
+%% \cup ~\{~
+%% (k, \kw{Type}~\ell_1, \kw{Type}~\ell_2, \kw{Type}~(\ell_1 \sqcup \ell_2))
+%% ~|~ \ell_1,\ell_2 \in\mathbb{N} ~\}
+%% }
+%% \end{array}
+%% \end{displaymath}
+%% The $(k, s, \kw{Prop}, \kw{Prop})$ is the relevant rule above that allows
+%% impredicativity for \kw{Prop}. One can divide this rule into two: $(\kw{e},
+%% s, \kw{Prop}, \kw{Prop})$ and $(\kw{n}, s, \kw{Prop}, \kw{Prop})$. if you
+%% consider Typer's $\Type 0$ as ICC's \kw{Prop}, the first rule is included in
+%% Typer but not the second. As it turns out, the second, is actually
+%% redundant in ICC*. More specifically:
+
+%% \begin{lemma}[Erasability of impredicative arguments]
+%% \label{lem:erasable}
+%% In ICC*, if $\Jtyper x {\tau_x : \kw{Type}~\ell}$ and $\Jtyper e {\tau_e :
+%% \kw{Prop}}$, then $x$ can only appear in $\Ferase e$ within arguments
+%% to functions of type $(y\:\tau_1) \TEarw \tau_2$ where $\tau_2 : \kw{Prop}$ and
+%% $\tau_1 : \kw{Type}~\_$.
+%% \end{lemma}
+%% \begin{proof}
+%% By induction on the derivation of $\Jtyper[\Gamma_e] e {\tau_e}$. Since $\tau_e :
+%% \kw{Prop}$, clearly $e$ can neither be a sort nor an arrow type and it
+%% cannot be $x$ itself either, so it can only be either a $\lambda y\:\tau_y \to e_y$ or
+%% an application $e_1~e_2$. We can apply the induction
+%% hypothesis to $e_y$ and to $e_1$. As for $e_2$, there are two cases:
+%% either $e_1$ takes an argument of type $\tau_1\:\kw{Type}~\ell'$ in which case
+%% we're done, or it takes an argument of type $\tau_1\:\kw{Prop}$ in which case
+%% we can again apply the induction hypothesis.
+%% \end{proof}
+%% Corollary: ICC*'s rule $(\kw{n}, s, \kw{Prop}, \kw{Prop})$ is redundant since
+%% we could convert all the impredicative functions that use it to functions
+%% that use $(\kw{e}, s, \kw{Prop}, \kw{Prop})$ instead.
+
+%% \subsection{Strong elimination of large inductive types}
+
+%% If we extend ICC* with Coq-style inductive types, Lemma~\ref{lem:erasable}
+%% does not hold any more because we can perform a \kw{case} analysis on an
+%% argument in universe $\Type\ell$ and return something in universe \kw{Prop}.
+%% For this reason, while the restriction of impredicativity to
+%% erasable functions does not make Typer weaker than ICC* it does make it in
+%% this respect weaker than CIC. But Typer is incomparable to CIC because in
+%% another respect it allows things that CIC does not.
+
+%% As mentioned before, CIC has a special restriction that large inductive
+%% types (i.e.~inductive types that belong to a universe that is smaller than
+%% some of the values it carries) cannot be used in a strong elimination
+%% (i.e.~a \kw{case} analysis that returns a type in a universe larger than
+%% that of the object analyzed).
+
+%% This restriction means for example that while we can define in Coq
+%% a large inductive type like:
+%% \begin{verbatim}
+%% Inductive Ω : Set :=
+%% | int : Ω
+%% | arw : Ω -> Ω -> Ω
+%% | all : forall k:Set, (k -> Ω) -> Ω.
+%% \end{verbatim}
+%% we cannot prove properties such as the following (which we needed
+%% while working on~\cite{Monnier07}):
+%% \begin{verbatim}
+%% forall K₁ K₂ F₁ F₂ P,
+%% all K₁ F₁ = all K₂ F₂ -> P K₁ F₁ -> P K₂ F₂.
+%% \end{verbatim}
+%% This important restriction significantly reduces the applicability of large
+%% inductive types, but is needed because it would be otherwise possible
+%% to ``smuggle'' a large element within an inductive object of a smaller
+%% universe and take it back out later, resulting in
+%% unsoundness~\cite{Coquand86b}.
+
+%% Since Typer's impredicativity is limited to erasable elements, those large
+%% elements cannot really be taken back out later anyway, by virtue of their
+%% erasability. For this reason, we \emph{conjecture} that our form of
+%% impredicativity does not require this restriction on strong elimination.
+%% As a consequence, in Typer we can define the above inductive type (with an
+%% erasable $k$) and prove its property (again with erasable $K_1$ and $K_2$).
+
+%% The weak justification behind it, is a philosophical one: erasable arguments
+%% are not \emph{significant}, so a function that takes an erasable argument
+%% could be considered as a mere ``schema'' or ``prototype'' which stands for
+%% all the specialized versions of the function. A similar argument is
+%% discussed by Fruchart and Longo in~\cite{Fruchart96}.
\section{Related work}
@@ -1049,16 +1158,43 @@ predecessors to be able to list them all. We will try and limit ourselves
to some recent systems which share enough of their design or their
goals here.
-\paragraph{Honu}\hspace{-10pt}~\cite{Rafkind12} is a programming language in
-the Racket system which provides an extensible infix/mixfix syntax
-integrated with Racket's metaprogramming facilities.
-\textbf{Typed Racket}~\cite{Felleisen11} uses an extension
-of Scheme's macro system to implement a statically typed variant of Racket
-as a sort of embedded DSL, thus implementing the type checker as part of
-a macro. It shares with Typer the characteristic of mixing Lisp-style
-macros and static typing, and generally the Racket system shares with Typer
-the goal of a being a ``language workbench'' on top of which other languages
-can easily be defined, Typed Racket and Honu being just some examples.
+\textbf{Honu}~\cite{Rafkind12} is a programming language in
+the Racket system which provides an extensible infix/prefix syntax
+integrated with Racket's metaprogramming facilities. Contrary to Typer, it
+delays some of the infix parsing to the macro-expansion phase, where grammar
+extensions are tied to macros. It can also handle more complex grammars
+than Typer, which can in turn lead to unexpected interactions between
+macros.
+The \textbf{Star} language~\cite{McCabe13} is a statically
+typed programming language which also makes it easy to define embedded DSLs
+via syntactic and macro expansion facilities, also relying on an OPG
+grammar. But contrary to Typer, it expands macros in a separate phase,
+which hence cannot get access to the typing context.
+
+\textbf{Template Haskell}~\cite{Sheard02}
+is an extension of Haskell to allow
+compile time metaprogramming. %% One of the main contribution of Template
+%% Haskell is to implement a metaprogramming system on top of a strongly typed
+%% purely functional language.
+%% For such purpose, our use of a
+%% monadic technique in Typer comes from Template Haskell.
+%%
+Typer's interleaving of type inference and macro expansion is very similar
+to that of Template Haskell. But Typer and Template Haskell differ in how
+the macros are used by the programmer: in Template Haskell, macro calls are
+made explicit in the source file by preceding them with a \Char{\$} sign rather
+than being determined by their type.
+Also Template Haskell is not meant to add new binding forms to the language:
+arguments to the macro are type checked before being passed to the macro.
+
+%% \textbf{Typed Racket}~\cite{Felleisen11} uses an extension
+%% of Scheme's macro system to implement a statically typed variant of Racket
+%% as a sort of embedded DSL, thus implementing the type checker as part of
+%% a macro. It shares with Typer the characteristic of mixing Lisp-style
+%% macros and static typing, and generally the Racket system shares with Typer
+%% the goal of a being a ``language workbench'' on top of which other languages
+%% can easily be defined, Typed Racket and Honu being just some examples.
+
%% At first look one might see Typer as a dependently typed Typed
%% Racket. Indeed both have a powerful macro system and a static type
%% system. But there are some important differences. The first difference
@@ -1073,15 +1209,13 @@ can easily be defined, Typed Racket and Honu being just some examples.
%% at the expansion site. Typed Racket and more generally Scheme macros, on
%% the other hand, do not have access to the lexical environment at the
%% expansion site.
-The \textbf{Star} language~\cite{McCabe13} is a statically
-typed programming language which also makes it easy to define embedded DSLs
-via syntactic and macro expansion facilities.
-\textbf{Scala} also provides sophisticated meta programming~\cite{Burmako13}
-and staged computation~\cite{Rompf13} facilities used in novel ways.
-\textbf{OCaml} offers extensible syntax and metaprogramming
-facilities in various forms, such as via its Camlp4
-system~\cite{de2003camlp4} and more recently with \emph{extension points},
-which work like macros, by mapping OCaml AST to OCaml AST.
+
+Many languages like \textbf{Scala}~\cite{Burmako13},
+\textbf{OCaml}~\cite{de2003camlp4}, and \textbf{Coq}~\cite{Coq00} offer
+extensible syntax and metaprogramming facilities in various forms, but
+these are bolted onto pre-existing languages, so they are less flexible
+and more complex than what we wanted for Typer.
+
%% where the ``P4'' stands for PreProcessor
%% and Pretty-Printer. Much like a macro system, it allows the programmer to
%% describe an extension to the OCaml parser. The job of CamlP4 is to convert
@@ -1109,49 +1243,33 @@ which work like macros, by mapping OCaml AST to OCaml AST.
%% OCaml syntax and would require annotation in Typer if we were to use
%% such strategy. We therefore think Typer has hit a sweet spot between
%% Lisp and OCaml.
-
-\paragraph{Template Haskell}\hspace{-10pt}~\cite{Sheard02}
-is an extension of Haskell to allow
-compile time metaprogramming. One of the main contribution of Template
-Haskell is to implement a metaprogramming system on top of a strongly typed
-purely functional language. %% For such purpose, our use of a
-%% monadic technique in Typer comes from Template Haskell.
-%%
-Typer's interleaving of type inference and macro expansion is very similar
-to that of Template Haskell. But Typer and Template Haskell differ on how
-the macros are used by the programmer: in Template Haskell, macro calls are
-made explicit in the source file by preceding them with a \Char{\$} sign rather
-than being determined by their type.
-Also Template Haskell is not meant to add new binding forms to the language:
-arguments to the macro are type checked before being passed to the macro.
%%
-\paragraph{Idris}\hspace{-10pt}~\cite{Brady13} and
+\textbf{Idris}~\cite{Brady13} and
\textbf{F-Star}~\cite{Swamy16} are programming languages with dependent
-types. They shares many of Typer's goals and also offers metaprogramming
-facilities, although these facilities are more aimed at writing proofs,
-while Typer's metaprogramming facilities are more currently geared toward
-syntactic extensions.
+types that share many of Typer's goals and also offer metaprogramming
+facilities; but those facilities are aimed mostly at writing proofs, so
+they do not contribute as much as they could to simplifying the design of
+the language.
%%
-\textbf{Zombie}~\cite{Casinghino14} is an experimental
-programming language with dependent types. One of its most novel features
-is to eschew automatic reductions at the type level and require manual cast
-operations instead. This is a bit like of Typer's intentionally weak typing
-rule for \kw{case}, relying on explicit cast operations using type equality
-witnesses for type refinement, but pushed yet further.
-
-\paragraph{Agda}\hspace{-10pt}~\cite{Bove09} is a proof assistant with
+%% \textbf{Zombie}~\cite{Casinghino14} is an experimental
+%% programming language with dependent types. One of its most novel features
+%% is to eschew automatic reductions at the type level and require manual cast
+%% operations instead. Typer's intentionally weak typing
+%% rule for \kw{case}, relying on explicit cast operations using type equality
+%% witnesses for type refinement, goes in the same direction.
+
+\textbf{Agda}~\cite{Bove09} is a proof assistant with
a syntax similar to Haskell's but with the possibility of adding mixfix and
-not just infix operators. Their use of mixfix operators like
-\id{if\_then\_else\_} as a way to add new syntactic forms is what gave us
-the idea of adding mixfix to S-expressions in Typer using operator
-precedence grammar. For a more detailed and formal discussion on mixfix
-operators and Agda, see~\cite{Danielsson08}.
+not just infix operators. Its use of mixfix operators
+like \id{if\_then\_else\_} as a way to add new syntactic forms is what gave
+us the idea of adding mixfix to S-expressions in Typer via an operator
+precedence grammar~\cite{Danielsson08}.
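+As an illustration (a sketch in Haskell syntax rather than Agda's), the
+mixfix name is simply an ordinary function whose arguments fill the
+underscore ``holes'', so \id{if c then a else b} is read as the
+application \id{if\_then\_else\_ c a b}:
+\begin{verbatim}
+-- Illustrative sketch: the desugaring target of the mixfix
+-- syntax is a plain three-argument function.
+if_then_else_ :: Bool -> a -> a -> a
+if_then_else_ c t e = if c then t else e
+
+main :: IO ()
+main = print (if_then_else_ True "yes" "no")   -- prints "yes"
+\end{verbatim}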
%%
-\textbf{Coq}~\cite{Coq00} has syntactic extensions similar
-to mixfix as well as a sophisticated metaprogramming language known as
-Ltac~\cite{Delahaye00}. More recently other metaprogramming languages have
-been designed for it such as Mtac~\cite{Ziliani13} and Rtac~\cite{Malecha16}.
+%% has syntactic extensions similar
+%% to mixfix as well as a sophisticated metaprogramming language known as
+%% Ltac~\cite{Delahaye00}. More recently other metaprogramming languages have
+%% been designed for it such as Mtac~\cite{Ziliani13} and Rtac~\cite{Malecha16}.
%% Coq's syntactic
%% extensions, based on CamlP4, are fairly sophisticated. But Coq's
%% metaprogramming language is a separate language that is very different from
@@ -1247,6 +1365,14 @@ functional programming languages. Its design is generally conservative in
that it mostly uses existing solutions, but tries to streamline them and
 combine them in ways which hopefully simplify the overall system while
making it more flexible at the same time.
+%%
+Its strength lies mainly in its syntactic flexibility and meta-programming
+facilities, although these are currently focused mostly on writing syntax
+extensions; we plan to extend them so as to also be able to manipulate
+the \id{Lexp} representation of the code, which is more convenient when
+generating proofs.
+Its main current weakness is its lack of hygiene, which needs to be
+addressed urgently.
While it has not been officially released yet, its code can be found at
\url{https://gitlab.com/monnier/typer}.
@@ -1264,5 +1390,5 @@ While it has not been officially released yet, its code can be found at
\bibliographystyle{alpha}
%% \bibliography{typer_theory}
-\bibliography{/u/monnier/share/misc/all}
+\bibliography{refs}
\end{document}
=====================================
refs.bib
=====================================
@@ -0,0 +1,693 @@
+
+
+@inproceedings{Barras08,
+ address = {Budapest, Hungary},
+ author = {Bruno Barras and Bruno Bernardo},
+ booktitle = {Conference on Foundations of Software Science and
+ Computation Structures},
+ month = apr,
+ series = {Lecture Notes in Computer Science},
+ title = {Implicit Calculus of Constructions as a Programming
+ Language with Dependent Types},
+ volume = {4962},
+ year = {2008},
+ abstract = {In this paper, we show how Miquel’s Implicit
+ Calculus of Constructions (ICC) can be used as a
+ programming language featuring dependent types. Since
+ this system has an undecidable type-checking, we
+ introduce a more verbose variant, called ICC∗ which
+ fixes this issue. Datatypes and program
+ specifications are enriched with logical assertions
+ (such as preconditions, postconditions, invariants)
+ and programs are decorated with proofs of those
+ assertions. The point of using ICC∗ rather than the
+ Calculus of Constructions (the core formalism of the
+ Coq proof assistant) is that all of the static
+ information (types and proof objects) is transparent,
+ in the sense that it does not affect the
+ computational behavior. This is concretized by a
+ built-in extraction procedure that removes this
+ static information. We also illustrate the main
+ features of ICC∗ on classical examples of
+ dependently typed programs.},
+ url = {http://www.lix.polytechnique.fr/\~{}bernardo/writings/
+ barras-bernardo-icc-fossacs08.pdf},
+}
+
+@inproceedings{Sheard02,
+ address = {Pittsburgh, Pennsylvania},
+ author = {Tim Sheard and Simon {Peyton Jones}},
+ booktitle = {Haskell Workshop},
+ month = oct,
+ pages = {1-16},
+ publisher = {ACM Press},
+ title = {Template metaprogramming for {Haskell}},
+ year = {2002},
+ url = {http://dl.acm.org/citation.cfm?id=636528},
+}
+
+@inproceedings{Danielsson08,
+ author = {Nils Anders Danielsson and Ulf Norell},
+ booktitle = {Implementation and Application of Functional
+ Languages},
+ key = {IFL'08},
+ month = sep,
+ pages = {80-99},
+ series = {Lecture Notes in Computer Science},
+ title = {Parsing Mixfix Operators},
+ volume = {6836},
+ year = {2008},
+ abstract = {A simple grammar scheme for expressions containing
+ mixfix operators is presented. The scheme is
+ parameterised by a precedence relation which is only
+ restricted to be a directed acyclic graph; this makes
+ it possible to build up precedence relations in a
+ modular way. Efficient and simple implementations of
+ parsers for languages with user-defined mixfix
+ operators, based on the grammar scheme, are also
+ discussed. In the future we plan to replace the
+ support for mixfix operators in the language Agda
+ with a grammar scheme and an implementation based on
+ this work.},
+ url = {https://pdfs.semanticscholar.org/6598/
+ 4aee4eccd577f9dd35154b1d24b157050d23.pdf},
+}
+
+@article{Floyd63,
+ author = {Robert W. Floyd},
+ journal = {Journal of the {ACM}},
+ month = jul,
+ number = {3},
+ pages = {316-333},
+ title = {Syntactic Analysis and Operator Precedence},
+ volume = {10},
+ year = {1963},
+ abstract = {Three increasingly restricted types of formal grammar
+ are phrase structure grammars, operator grammars and
+ precedence grammars. Precedence grammars form models
+ of mathematical and algorithmic languages which may
+ be analyzed mechanically by a simple procedure based
+ on a matrix representation of a precedence relation
+ between character pairs.},
+ url = {http://dl.acm.org/citation.cfm?doid=321172.321179},
+}
+
+@article{Cockx18,
+ author = {Jesper Cockx and Andreas Abel},
+ journal = {Proceedings of the {ACM} on Programming Languages},
+ number = {ICFP},
+ pages = {75:1--75:30},
+ title = {Elaborating dependent (co)pattern matching},
+ volume = {2},
+ year = {2018},
+ abstract = {In a dependently typed language, we can guarantee
+ correctness of our programs by providing formal
+ proofs. To check them, the typechecker elaborates
+ these programs and proofs into a low level core
+ language. However, this core language is by nature
+ hard to understand by mere humans, so how can we know
+ we proved the right thing? This question occurs in
+ particular for dependent copattern matching, a
+ powerful language construct for writing programs and
+ proofs by dependent case analysis and mixed
+ induction/coinduction. A definition by copattern
+ matching consists of a list of clauses that are
+ elaborated to a case tree, which can be further
+ translated to primitive eliminators. In previous work
+ this second step has received a lot of attention, but
+ the first step has been mostly ignored so far. We
+ present an algorithm elaborating definitions by
+ dependent copattern matching to a core language with
+ inductive datatypes, coinductive record types, an
+ identity type, and constants defined by well-typed
+ case trees. To ensure correctness, we prove that
+ elaboration preserves the first-match semantics of
+ the user clauses. Based on this theoretical work, we
+ reimplement the algorithm used by Agda to check
+ left-hand sides of definitions by pattern matching.
+ The new implementation is at the same time more
+ general and less complex, and fixes a number of bugs
+ and usability issues with the old version. Thus we
+ take another step towards the formally verified
+ implementation of a practical dependently typed
+ language.},
+ url = {https://dl.acm.org/citation.cfm?id=3236770},
+}
+
+@inproceedings{Christiansen16,
+ author = {David Christiansen and Edwin Brady},
+ booktitle = {International Conference on Functional Programming},
+ key = {ICFP'16},
+ month = sep,
+ pages = {284-297},
+ title = {Elaborator Reflection: Extending {I}dris in {I}dris},
+ year = {2016},
+ abstract = {Many programming languages and proof assistants are
+ defined by elaboration from a high-level language
+ with a great deal of implicit information to a highly
+ explicit core language. In many advanced languages,
+ these elaboration facilities contain powerful tools
+ for program construction, but these tools are rarely
+ designed to be repurposed by users. We describe
+ elaborator reflection, a paradigm for metaprogramming
+ in which the elaboration machinery is made directly
+ available to metaprograms, as well as a concrete
+ realization of elaborator reflection in Idris, a
+ functional language with full dependent types. We
+ demonstrate the applicability of Idris’s reflected
+ elaboration framework to a number of realistic
+ problems, we discuss the motivation for the specific
+ features of its design, and we explore the broader
+ meaning of elaborator reflection as it can relate to
+ other languages.},
+ url = {https://eb.host.cs.st-andrews.ac.uk/drafts/elab-
+ reflection.pdf},
+}
+
+@article{Barendregt91b,
+ author = {Henk P. Barendregt},
+ journal = {Journal of Functional Programming},
+ month = apr,
+ number = {2},
+ pages = {121-154},
+ title = {Introduction to generalized type systems},
+ volume = {1},
+ year = {1991},
+ abstract = {Programming languages often come with type systems.
+ Some of these are simple, others are sophisticated.
+ As a stylistic representation of types in programming
+ languages several versions of typed lambda calculus
+ are studied. During the last 20 years many of these
+ systems have appeared, so there is some need of
+ classification. Working towards a taxonomy,
+ Barendregt (1991) gives a fine-structure of the
+ theory of constructions (Coquand and Huet 1988) in
+ the form of a canonical cube of eight type systems
+ ordered by inclusion. Berardi (1988) and Terlouw
+ (1988) have independently generalized the method of
+ constructing systems in the λ-cube. Moreover,
+ Berardi (1988, 1990) showed that the generalized type
+ systems are flexible enough to describe many logical
+ systems. In that way the well-known
+ propositions-as-types interpretation obtains a nice
+ canonical form.},
+ url = {http://www.jeremyherve.net/bar91.pdf},
+}
+
+@article{Pierce00,
+ author = {Benjamin C. Pierce and David M. Turner},
+ journal = {Transactions on Programming Languages and Systems},
+  month = jan,
+ number = {1},
+ pages = {1-44},
+ title = {Local type inference},
+ volume = {22},
+ year = {2000},
+ abstract = {We study two partial type inference methods for a
+ language combining subtyping and impredicative
+ polymorphism. Both methods are local in the sense
+ that missing annotations are recovered using only
+ information from adjacent nodes in the syntax tree,
+ without long-distance constraints such as unification
+ variables. One method infers type arguments in
+ polymorphic applications using a local constraint
+ solver. The other infers annotations on bound
+ variables in function abstractions by propagating
+ type constraints downward from enclosing application
+ nodes. We motivate our design choices by a
+ statistical analysis of the uses of type inference in
+ a sizable body of existing ML code.},
+ url = {http://dl.acm.org/citation.cfm?id=345100},
+}
+
+@inproceedings{Pfenning99,
+ author = {Frank Pfenning and Carsten Schürmann},
+ booktitle = {International Conference on Automated Deduction},
+ key = {CADE-16},
+ month = jul,
+ pages = {202-206},
+ series = {Lecture Notes in Artificial Intelligence},
+ title = {System description: Twelf - a meta-logical framework
+ for deductive systems},
+ volume = {1632},
+ year = {1999},
+}
+
+@inproceedings{MishraLinger08,
+ address = {Budapest, Hungary},
+ author = {Nathan Mishra-Linger and Tim Sheard},
+ booktitle = {Conference on Foundations of Software Science and
+ Computation Structures},
+ month = apr,
+ pages = {350-364},
+ series = {Lecture Notes in Computer Science},
+ title = {Erasure and Polymorphism in Pure Type Systems},
+ volume = {4962},
+ year = {2008},
+ abstract = {We introduce Erasure Pure Type Systems, an extension
+ to Pure Type Systems with an erasure semantics
+ centered around a type constructor indicating
+ parametric polymorphism. The erasure phase is guided
+ by lightweight program annotations. The typing rules
+ guarantee that well-typed programs obey a phase
+ distinction between erasable (compile-time) and
+ non-erasable (run-time) terms. The erasability of an
+ expression depends only on how its value is used in
+ the rest of the program. Despite this simple
+ observation, most languages treat erasability as an
+ intrinsic property of expressions, leading to code
+ duplication problems. Our approach overcomes this
+ deficiency by treating erasability extrinsically.
+ Because the execution model of EPTS generalizes the
+ familiar notions of type erasure and parametric
+ polymorphism, we believe functional programmers will
+ find it quite natural to program in such a setting.},
+}
+
+@article{Bernardy12,
+ author = {Jean-Philippe Bernardy and Patrik Jansson and
+ Ross Paterson},
+ journal = {Journal of Functional Programming},
+ number = {2},
+ pages = {1-46},
+ title = {Proofs for free: Parametricity for dependent types},
+ volume = {22},
+ year = {2012},
+ abstract = {Reynolds' abstraction theorem shows how a typing
+ judgement in System F can be translated into a
+ relational statement (in second order predicate
+ logic) about inhabitants of the type. We obtain a
+ similar result for pure type systems (PTSs): for any
+ PTS used as a programming language, there is a PTS
+ that can be used as a logic for parametricity. Types
+ in the source PTS are translated to relations
+ (expressed as types) in the target. Similarly, values
+ of a given type are translated to proofs that the
+ values satisfy the relational interpretation. We
+ extend the result to inductive families. We also show
+ that the assumption that every term satisfies the
+ parametricity condition generated by its type is
+ consistent with the generated logic.},
+ url = {http://doi.acm.org/10.1017/S0956796812000056},
+}
+
+@inproceedings{Bernardo09,
+ author = {Bruno Bernardo},
+ booktitle = {International Conference on Theorem Proving in
+ Higher-Order Logics},
+ key = {TPHOLs09},
+ month = aug,
+ series = {Lecture Notes in Computer Science},
+ title = {Towards an Implicit Calculus of Inductive
+ Constructions. Extending the Implicit Calculus of
+ Constructions with Union and Subset Types},
+ volume = {5674},
+ year = {2009},
+ abstract = {We present extensions of Miquel's Implicit Calculus
+ of Constructions (ICC) and Barras and Bernardo's
+ decidable Implicit Calculus of Constructions (ICC*)
+ with union and subset types. The purpose of these
+ systems is to solve the problem of interaction
+ betweeen logical and computational data. This is a
+ work in progress and our long term goal is to add the
+ whole inductive types to ICC and ICC* in order to
+ define a complete framework for theorem proving.},
+ url = {https://hal.inria.fr/inria-00432649},
+}
+
+@techreport{Gimenez94,
+ author = {Eduardo Giménez},
+ institution = {École Normale Supérieure de Lyon},
+ number = {RR1995-07},
+ title = {Codifying guarded definitions with recursive schemes},
+ year = {1994},
+ abstract = {We formalize an extension of the Calculus of
+ Constructions with inductive and coinductive types
+ which allows a more direct description of recursive
+ definitions. The approach we follow is close to the
+ one proposed by Thierry Coquand for Martin-Loef's
+ type theory in his paper ``Infinite Objects in Type
+ Theory''. Recursive objects can be defined by
+ fixed-point definitions as in functional programming
+ languages, and a syntactical checking of these
+ definitions avoids the introduction of
+ non-normalizable terms. We show that the conditions
+ for accepting a recursive definition proposed in the
+ mentioned paper are not sufficient for the Calculus
+ of Constructions, and we modify them. As a way of
+ justifying our conditions, we develop a general
+ method to codify a fix point definition satisfying
+ them using well-known recursive schemes, like
+ primitive recursion and co-recursion. We also propose
+ different reduction rules from the ones used in [5]
+ in order to obtain a decidable conversion relation
+ for the system.},
+ url = {ftp://ftp.ens-lyon.fr/pub/LIP/Rapports/RR/RR1995/RR1995-
+ 07.ps.Z},
+}
+
+@inproceedings{Sheard04,
+ address = {Cork},
+ author = {Tim Sheard and Emir Pašalić},
+ booktitle = {Logical Frameworks and Meta-Languages},
+ month = jul,
+ title = {Meta-programming with built-in type equality},
+ year = {2004},
+}
+
+@book{CPDT,
+ author = {Adam Chlipala},
+ publisher = {MIT Press},
+ title = {Certified Programming with Dependent Types},
+ year = {2013},
+}
+
+@inproceedings{Rafkind12,
+ author = {Jon Rafkind and Matthew Flatt},
+ booktitle = {Proceedings of Generative Programming and Component
+ Engineering},
+ key = {GPCE'12},
+ title = {Honu: A Syntactically Extensible Language},
+ year = {2012},
+ abstract = {Honu is a new language that fuses traditional
+ algebraic notation (e.g., infix binary operators)
+ with Scheme-style language extensibility. A key
+ element of Honu’s design is an enforestation
+ parsing step, which converts a flat stream of tokens
+ into an S-expression-like tree, in addition to the
+ initial “read” phase of parsing and interleaved
+ with the “macro-expand” phase. We present the
+ design of Honu, explain its parsing and
+ macro-expansion algorithm, and show example syntactic
+ extensions.},
+}
+
+@inproceedings{McCabe13,
+ author = {Frank McCabe and Michael Sperber},
+ booktitle = {International Conference on Principles and Practices
+ of Programming on the Java Platform},
+ key = {PPPJ'13},
+ month = sep,
+ pages = {89-100},
+ title = {Feel Different on the {J}ava Platform: The {S}tar
+ Programming Language},
+ year = {2013},
+ abstract = {Star is a functional, multi-paradigm and extensible
+ programming language that runs on the Java platform.
+ Starview Inc. developed the language as an integral
+ part of the Starview Enterprise Platform, a framework
+ for real-time business applications such as factory
+ scheduling and data analytics. Star borrows from many
+ languages, with obvious heritage from Haskell, ML,
+ and April, but also contains new approaches to some
+ design aspects, such as syntax and syntactic
+ extensibility, actors, and queries. Its texture is
+ quite different from that of other languages on the
+ Java platform. Despite this, the learning curve for
+ Java programmers is surprisingly shallow. The
+ combination of a powerful type system (which includes
+ type inference, constrained polymorphism, and
+ existentials) and syntactic extensibility make the
+ Star well-suited to producing embedded
+ domain-specific languages. This paper gives an
+ overview of the language, and reports on some aspects
+ of its design process, on our experience on using it
+ in industrial projects, and on our experience
+ implementing Star on the Java platform.},
+}
+
+@inproceedings{Burmako13,
+ author = {Eugene Burmako},
+ booktitle = {Workshop on Scala},
+  key = {SCALA'13},
+ pages = {3:1--3:10},
+ title = {Scala macros: let our powers combine!: on how rich
+ syntax and static types work with metaprogramming},
+ year = {2013},
+ abstract = {Compile-time metaprogramming has been proven
+ immensely useful enabling programming techniques such
+ as language virtualization, embedding of external
+ domain-specific languages, self-optimization, and
+ boilerplate generation among many others. In the
+ recent production release of Scala 2.10 we have
+ introduced macros, an experimental facility which
+ gives its users compile-time metaprogramming powers.
+ Alongside of the mainline release of Scala Macros, we
+ have also introduced other macro flavors, which
+ provide their users with different interfaces and
+ capabilities for interacting with the Scala compiler.
+ In this paper, we show how the rich syntax and static
+ types of Scala synergize with macros, through a
+ number of real case studies using our macros (some of
+ which are production systems) such as language
+ virtualization, type providers, materialization of
+ type class instances, type-level programming, and
+ embedding of external DSLs. We explore how macros
+ enable new and unique ways to use pre-existing
+ language features such as implicits, dynamics,
+ annotations, string interpolation and others, showing
+ along the way how these synergies open up new ways of
+ dealing with software development challenges.},
+}
+
+@misc{de2003camlp4,
+ author = {de Rauglaudre, Daniel},
+ title = {Camlp4 reference manual},
+ year = {2003},
+ url = {https://github.com/ocaml/camlp4},
+}
+
+@misc{Coq00,
+ author = {Gérard P. Huet and Christine Paulin-Mohring and
+ others},
+ howpublished = {Part of the Coq system version 6.3.1},
+ month = may,
+ title = {The {Coq} Proof Assistant Reference Manual},
+ year = {2000},
+}
+
+@article{Brady13,
+ author = {Edwin Brady},
+ journal = {Journal of Functional Programming},
+ number = {5},
+ pages = {552-593},
+ title = {Idris, a general-purpose dependently typed
+ programming language: Design and implementation},
+ volume = {23},
+ year = {2013},
+ abstract = {Many components of a dependently typed programming
+ language are by now well understood, for example, the
+ underlying type theory, type checking, unification
+ and evaluation. How to combine these components into
+ a realistic and usable high-level language is,
+ however, folklore, discovered anew by successive
+ language implementors. In this paper, I describe the
+ implementation of Idris, a new dependently typed
+ functional programming language. Idris is intended to
+ be a general-purpose programming language and as such
+ provides high-level concepts such as implicit syntax,
+ type classes and do notation. I describe the
+ high-level language and the underlying type theory,
+ and present a tactic-based method for elaborating
+ concrete high-level syntax with implicit arguments
+ and type classes into a fully explicit type theory.
+ Furthermore, I show how this method facilitates the
+ implementation of new high-level language
+ constructs.},
+ url = {https://eb.host.cs.st-andrews.ac.uk/drafts/impldtp.pdf},
+}
+
+@inproceedings{Swamy16,
+ author = {Nikhil Swamy and Cătălin Hriţcu and Chantal Keller and
+ Aseem Rastogi and Antoine Delignat-Lavaud and
+ Simon Forest and Karthikeyan Bhargavan and
+ Cédric Fournet and Pierre-Yves Strub and
+ Markulf Kohlweiss and Jean-Karim Zinzindohoue and
+ Santiago Zanella-Béguelin},
+ booktitle = {Symposium on Principles of Programming Languages},
+ key = {POPL'16},
+ month = jan,
+ pages = {256-270},
+ publisher = {ACM Press},
+ title = {Dependent Types and Multi-monadic Effects in {F*}},
+ year = {2016},
+ abstract = {We present a new, completely redesigned, version of
+ F*, a language that works both as a proof assistant
+ as well as a general-purpose, verification-oriented,
+ effectful programming language. In support of these
+ complementary roles, F* is a dependently typed,
+ higher-order, call-by-value language with _primitive_
+ effects including state, exceptions, divergence and
+ IO. Although primitive, programmers choose the
+ granularity at which to specify effects by equipping
+ each effect with a monadic, predicate transformer
+ semantics. F* uses this to efficiently compute
+ weakest preconditions and discharges the resulting
+ proof obligations using a combination of SMT solving
+ and manual proofs. Isolated from the effects, the
+ core of F* is a language of pure functions used to
+ write specifications and proof terms---its
+ consistency is maintained by a semantic termination
+ check based on a well-founded order. We evaluate our
+ design on more than 55,000 lines of F* we have
+ authored in the last year, focusing on three main
+ case studies. Showcasing its use as a general-purpose
+ programming language, F* is programmed (but not
+ verified) in F*, and bootstraps in both OCaml and F#.
+ Our experience confirms F*'s pay-as-you-go cost
+ model: writing idiomatic ML-like code with no finer
+ specifications imposes no user burden. As a
+ verification-oriented language, our most significant
+ evaluation of F* is in verifying several key modules
+ in an implementation of the TLS-1.2 protocol
+ standard. For the modules we considered, we are able
+ to prove more properties, with fewer annotations
+ using F* than in a prior verified implementation of
+ TLS-1.2. Finally, as a proof assistant, we discuss
+ our use of F* in mechanizing the metatheory of a
+ range of lambda calculi, starting from the simply
+ typed lambda calculus to System F-omega and even
+ micro-F*, a sizeable fragment of F* itself---these
+ proofs make essential use of F*'s flexible
+ combination of SMT automation and constructive
+ proofs, enabling a tactic-free style of programming
+ and proving at a relatively large scale.},
+ url = {https://dl.acm.org/citation.cfm?id=2837655},
+}
+
+@inproceedings{Bove09,
+ author = {Ana Bove and Peter Dybjer and Ulf Norell},
+ booktitle = {International Conference on Theorem Proving in
+ Higher-Order Logics},
+ key = {TPHOLs09},
+ month = aug,
+ pages = {73-78},
+ series = {Lecture Notes in Computer Science},
+ title = {A Brief Overview of {A}gda – A Functional Language
+ with Dependent Types},
+ volume = {5674},
+ year = {2009},
+ abstract = {We give an overview of Agda, the latest in a series
+ of dependently typed programming languages developed
+ in Gothenburg. Agda is based on Martin-Löf’s
+ intuitionistic type theory but extends it with
+ numerous programming language features. It supports a
+ wide range of inductive data types, including
+ inductive families and inductive-recursive types,
+ with associated flexible pattern-matching. Unlike
+ other proof assistants, Agda is not tactic-based.
+ Instead it has an Emacs-based interface which allows
+ programming by gradual refinement of incomplete
+ type-correct terms.},
+ url = {http://www.cse.chalmers.se/\~{}ulfn/papers/tphols09/
+ tutorial.pdf},
+}
+
+@proceedings{FOSSACS08,
+ address = {Budapest, Hungary},
+ booktitle = {Conference on Foundations of Software Science and
+ Computation Structures},
+ month = apr,
+ series = {Lecture Notes in Computer Science},
+ title = {Conference on Foundations of Software Science and
+ Computation Structures},
+ volume = {4962},
+ year = {2008},
+}
+
+@proceedings{TPHOLs09,
+ booktitle = {International Conference on Theorem Proving in
+ Higher-Order Logics},
+ key = {TPHOLs09},
+ month = aug,
+ series = {Lecture Notes in Computer Science},
+ title = {International Conference on Theorem Proving in
+ Higher-Order Logics},
+ volume = {5674},
+ year = {2009},
+}
+
+@proceedings{SCALA13,
+ booktitle = {Workshop on Scala},
+  key = {SCALA'13},
+ title = {Workshop on Scala},
+ year = {2013},
+}
+
+@proceedings{ICFP16,
+ booktitle = {International Conference on Functional Programming},
+ key = {ICFP'16},
+ month = sep,
+ title = {International Conference on Functional Programming},
+ year = {2016},
+}
+
+@proceedings{IFL08,
+ booktitle = {Implementation and Application of Functional
+ Languages},
+ key = {IFL'08},
+ month = sep,
+ series = {Lecture Notes in Computer Science},
+ title = {Implementation and Application of Functional
+ Languages},
+ volume = {6836},
+ year = {2008},
+}
+
+@proceedings{PPPJ13,
+ booktitle = {International Conference on Principles and Practices
+ of Programming on the Java Platform},
+ key = {PPPJ'13},
+ month = sep,
+ title = {International Conference on Principles and Practices
+ of Programming on the Java Platform},
+ year = {2013},
+}
+
+@proceedings{CADE99,
+ booktitle = {International Conference on Automated Deduction},
+ key = {CADE-16},
+ month = jul,
+ series = {Lecture Notes in Artificial Intelligence},
+ title = {International Conference on Automated Deduction},
+ volume = {1632},
+ year = {1999},
+}
+
+@proceedings{GPCE12,
+ booktitle = {Proceedings of Generative Programming and Component
+ Engineering},
+ key = {GPCE'12},
+ title = {Proceedings of Generative Programming and Component
+ Engineering},
+ year = {2012},
+}
+
+@proceedings{Haskell02,
+ address = {Pittsburgh, Pennsylvania},
+ booktitle = {Haskell Workshop},
+ month = oct,
+ publisher = {ACM Press},
+ title = {Haskell Workshop},
+ year = {2002},
+}
+
+@proceedings{LFM04,
+ address = {Cork},
+ booktitle = {Logical Frameworks and Meta-Languages},
+ month = jul,
+ title = {Logical Frameworks and Meta-Languages},
+ year = {2004},
+}
+
+@proceedings{POPL16,
+ booktitle = {Symposium on Principles of Programming Languages},
+ key = {POPL'16},
+ month = jan,
+ publisher = {ACM Press},
+ title = {Symposium on Principles of Programming Languages},
+ year = {2016},
+}
+
View it on GitLab: https://gitlab.com/monnier/typer/commit/e8ea93048a7a0edf1ef6a2e188e1f75df9f…