Stefan pushed to branch report/itd at Stefan / Typer
Commits: 9cc26153 by Stefan Monnier at 2018-10-15T21:51:30Z -
- - - - -
1 changed file:
- paper.tex
Changes:
===================================== paper.tex =====================================
@@ -168,26 +168,25 @@
Typer is a functional language based on a pure type system, in the tradition
of Coq~\cite{Coq00}, Lean, and Agda~\cite{Bove09}, but focusing on programs
more than proofs, like Idris~\cite{Brady13}, F-star~\cite{Swamy16},
-Zombie~\cite{Casinghino14}, and many others before. Its design follows that
-of Scheme, in the sense that it intends to provide a minimalist core
-language on top of which a nice language can be built by metaprogramming.
+Zombie~\cite{Casinghino14}, and many others. Its design follows that of
+Scheme, in the sense that it intends to provide a minimalist core language
+on top of which a nice surface language can be built by metaprogramming.
So the focus of Typer's design is on providing a good core language which is
-the target of the metaprogramming facilities. Some of the design goal of
+the target of the metaprogramming facilities. Some of the design goals of
this language are:
\begin{itemize}
\item We want this core language to be usable both to write proofs and to
  write programs.
-\item We want an economy of concepts, in other words a simple and
-  clean language. This is desirable not only for aesthetic reasons, but
-  also for pragmatic reasons such as making the soundness proof
-  hopefully simpler.
+\item We want an economy of concepts, in other words a simple language with
+  orthogonal features. We want this both for aesthetic and
+  pragmatic reasons.
\item High-level enough to be convenient to build on top of it.
\item Yet we also want it to be low-level so the language itself does not
  impose unneeded inefficiencies which the compiler then needs to eliminate.
\item A reasonably efficient implementation shouldn't require excessive
  efforts.
\end{itemize}
-The Calculus of Constructions satisfies the first two points above, but
+The Calculus of Constructions (CoC) satisfies the first two points above, but
falls short on the efficiency side when it comes to representing data
structures. The Calculus of Inductive Constructions (CIC)~\cite{Paulin93}
solves most of those issues, especially in the form presented
@@ -199,11 +198,11 @@
Yet, early experience with it made us feel that it was still a bit too
high-level, introducing inefficiencies in some places. The main problem
appears in code that wants to manipulate tuples: while defining tuples as
single-constructor datatypes is not really problematic in terms of type
-definition or tuple construction, it becomes annoying when the time comes to
-get data out those tuples: every field access becomes a \kw{case} statement
-with a single branch that extracts each and every field of the tuple even if
-only a single field is needed. So a simple field selection becomes an
-operation of size proportional to the tuple's size.
+definition or tuple construction, it becomes annoying when extracting data
+out of those tuples: every field access becomes a \kw{case} statement with
+a single branch that extracts each and every field of the tuple even if only
+a single field is needed. So a simple field selection becomes an operation
+of size proportional to the tuple's size.
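As a rough illustration of the projection problem (a sketch in OCaml rather
than Typer/CUC syntax, with made-up names), encoding a tuple as
a single-constructor datatype forces every field selection through
a one-branch case that names all of the fields:
\begin{verbatim}
(* A 3-field "tuple" encoded as a single-constructor datatype. *)
type ('a, 'b, 'c) tuple3 = Tuple3 of 'a * 'b * 'c

(* Selecting one field requires a one-branch match that mentions every
   field, so the projection's size grows with the tuple's width. *)
let first t =
  match t with
  | Tuple3 (a, _b, _c) -> a
\end{verbatim}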
We could easily solve this by providing additional ad-hoc support for
tuples, but that goes against our ideal of a minimal core language since we
@@ -231,7 +230,7 @@
The contributions of this article are:
\begin{itemize}
\item The CUC language, which provides separately sum types, recursive
  types, and tuple types, and where they can be combined without loss of
-  efficiency to provide the usual functionally traditionally provided by
+  efficiency to provide the usual functionality traditionally provided by
  algebraic data types or inductive types, making it better suited as
  a compiler intermediate language.
\item A kind of case-analysis construct where the default branch also gets
@@ -247,13 +246,13 @@
In this section, we briefly present the two problems our design aims to
address.
\subsection{Native tuples}
-Proving algebraic data types and tuples without overlap is not a new
-problem, and Standard ML~\cite{Milner97} solved it years ago by restricting
-its datatype constructors to carry exactly one element, no more no less.
-In other words, SML's datatype only provides sum types and recursive types,
-and tuples are provided separately. While it is elegant, this solution
-suffers from an inefficiency we wanted to avoid. For example, with
-a datatype like:
+Providing algebraic data types and tuples without overlap is not a new
+problem. For example, Standard ML~\cite{Milner97} solved it years ago by
+restricting its datatype constructors to carry exactly one element, no more
+no less. In other words, SML's datatype only provides sum types and
+recursive types, and tuples are provided separately. While it is elegant,
+this solution suffers from an inefficiency we wanted to avoid. For example,
+with a datatype like:
\begin{verbatim}
datatype 'a list =
  | cons of 'a * 'a list
@@ -265,6 +264,7 @@
containing \texttt{cons(<pointer>)}. A sufficiently smart compiler may be
able to eliminate this indirection, of course, but it can be surprisingly
complicated~\cite{Shao97,Leroy92,Leroy97}.
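The indirection in question can be made concrete with an OCaml sketch
(OCaml, not SML; the type names are made up): a multi-argument constructor
is compiled to one flat block, whereas an SML-style constructor carrying
a single pair naively puts that pair in a separately allocated block.
\begin{verbatim}
(* Flat layout: Cons takes two arguments, so a cons cell is one block
   holding the head and the tail directly. *)
type 'a flat_list = FNil | FCons of 'a * 'a flat_list

(* SML-style layout: the constructor carries exactly one argument, a pair,
   so a naive compilation allocates the pair separately and the cons cell
   holds only a pointer to it. *)
type 'a boxed_list = BNil | BCons of ('a * 'a boxed_list)
\end{verbatim}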
+%% FIXME: Example/description is hard to follow!
Another approach is to let the user manipulate tags explicitly, so we can
have code like:
\begin{verbatim}
@@ -272,13 +272,13 @@ have code like:
   field1 : f1 field0;
   field2 : f2 field0}
\end{verbatim}
-So the type of the fields depends on the tag stored in the first field, and
+where the type of the fields depends on the tag stored in the first field, and
after a case analysis on \texttt{e.field0} the type of the subsequent fields
becomes known. This approach can indeed be used to solve our problem but we
rejected it for the following reason: by making tags first class, they
become more expensive, for example because it is then difficult to combine
them into the object header that is required for the needs of the memory
-management, for example. Basically, the language becomes too low-level, too
+management. Basically, the language becomes too low-level, too
close to machine language, tying the hands of the compiler too tightly for
our needs.
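To give a flavour of what ``the type of the fields depends on the tag''
means, here is a loose OCaml analogue using a GADT (a sketch only, with
invented names; unlike the approach described above, the tag here is not
a first-class value whose runtime cost matters, its type index simply
determines the payload):
\begin{verbatim}
(* Case analysis on the tag constructor reveals the type of the payload. *)
type ('a, 'p) tag =
  | Nil_tag  : ('a, unit) tag
  | Cons_tag : ('a, 'a * 'a lst) tag
and 'a lst = Mk : ('a, 'p) tag * 'p -> 'a lst

let head (l : 'a lst) : 'a option =
  match l with
  | Mk (Nil_tag, ()) -> None
  | Mk (Cons_tag, (x, _tail)) -> Some x
\end{verbatim}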
@@ -320,27 +320,27 @@
When performing case analysis in Coq and other languages of the family, the
default branch does not get any typing refinement. More specifically, let's
consider the following example where \texttt{e} is assumed to be a list:
\begin{verbatim}
-  match e with
-  | nil => exp1
-  | _ => exp2
+  match E with
+  | nil => EXP1
+  | _ => EXP2
\end{verbatim}
-The typing of \texttt{exp1} can take advantage of the fact the we know
-\texttt{e} was found to be equal to \texttt{nil}, but the typing of
-\texttt{exp2} has no such benefit: it cannot take advantage of the fact that
-we have found \texttt{e} to be different from \texttt{nil}.
+The typing of \texttt{EXP1} can take advantage of the fact that we know
+\texttt{E} was found to be equal to \texttt{nil}, but the typing of
+\texttt{EXP2} has no such benefit: it cannot take advantage of the fact that
+we have found \texttt{E} to be different from \texttt{nil}.
It would not be difficult to change Coq such that default branches get
additional typing information, either providing them with a proof that the
match target is different from all the mentioned branches (i.e.~a proof that
-``\texttt{not (e = nil)}'' in the above example), or providing them with
+``\texttt{not (E = nil)}'' in the above example), or providing them with
a proof that the match target is equal to one of the remaining possibilities
-(i.e.~a proof that ``\texttt{∃x,y. e = cons x y}'' in the above
+(i.e.~a proof that ``\texttt{∃x,y. E = cons x y}'' in the above
example). The problem here is that those additional proofs would tend to
grow fairly large, imposing a cost that is difficult to justify since
experience shows that such a refined information is not often useful.
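Schematically, with the first option the refinement described above amounts
to type-checking the two branches under different hypotheses (a sketch in
standard notation, not the paper's):
\[
\Gamma,\; \mathtt{E} = \mathtt{nil} \;\vdash\; \mathtt{EXP1} : \tau
\qquad\qquad
\Gamma,\; \mathtt{not}\,(\mathtt{E} = \mathtt{nil}) \;\vdash\; \mathtt{EXP2} : \tau
\]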
This last point argues that this is not an important problem to solve, and
-we partly agree: it was not the primary motivation for our design. Still: our
+we partly agree: it was not the primary motivation for our design. Yet, our
design provides us with that kind of refinement at a much lower cost, making
it practical to provide this feature even if it is not used very often.
@@ -407,7 +407,7 @@ it practical to provide this feature even if it is not used very often.
\end{figure}
Figure~\ref{fig:ccw} shows our base language \CCw{} as a pure type system
-(PTS)~\cite{Barendregt91b}. It is a variant of CC with a tower of universes
+(PTS)~\cite{Barendregt91b}. It is a variant of CoC with a tower of universes
à la ECC~\cite{Luo89}.
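For readers less familiar with the PTS presentation, the usual ECC-style
specification of such a tower is sketched below (standard sorts, axioms and
\(\Pi\)-rules only; the actual Figure~\ref{fig:ccw} may differ in details
such as cumulativity or an impredicative bottom universe):
\[
\begin{array}{l@{\qquad}l}
\mathcal{S} = \{\, \mathrm{Type}_\ell \mid \ell \in \mathbb{N} \,\} & \text{sorts}\\
\mathcal{A} = \{\, (\mathrm{Type}_\ell,\ \mathrm{Type}_{\ell+1}) \,\} & \text{axioms, i.e.\ } \mathrm{Type}_\ell : \mathrm{Type}_{\ell+1}\\
\mathcal{R} = \{\, (\mathrm{Type}_{\ell_1},\ \mathrm{Type}_{\ell_2},\ \mathrm{Type}_{\max(\ell_1,\ell_2)}) \,\} & \Pi\text{-formation rules}
\end{array}
\]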
While inductive types have non-trivial interactions with impredicativity,
View it on GitLab: https://gitlab.com/monnier/typer/commit/9cc261534bdaa2f33731768f755283f35a31...