I started to look at what let-polymorphism should look like and I have a bit of a problem. Hopefully someone here can give me some ideas or at least opinions.
*** Case 1 (easy): we want to be able to write
map f xs = case xs
  | nil => []
  | cons x xs => cons (f x) (map f xs);
and that should work fairly easily: after inference we get 2 uninstantiated metavars, so we add the corresponding 2 lambdas around the whole definition and we're done.
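Concretely, generalization would presumably make the definition come out as (modulo the names chosen for the two metavars) the fully explicit form we'll see again in case 5:

map = lambda a ≡> lambda b ≡> lambda f -> lambda xs ->
  case xs
  | nil => []
  | cons x xs => cons (f x) (map f xs);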
*** Case 2 (easy):
map : (?a -> ?b) -> List ?a -> List ?b;
Here, similarly, the ?a and ?b metavars are left uninstantiated, so we can add the corresponding 2 pis around the type and we're done.
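I.e. we'd end up with the same (erasable) quantification that case 4 below writes out explicitly:

map : (a : Type) ≡> (b : Type) ≡> (a -> b) -> List a -> List b;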
*** Case 3: We could also allow
map : (a -> b) -> List a -> List b;
and automatically treat a reference to a non-existent variable (in a type annotation) as a metavar, so the annotation above would be read just like the one in case 2.
*** Case 4: now comes the tricky part. What about case 1 combined with a type annotation?
map : (a : Type) ≡> (b : Type) ≡> (a -> b) -> List a -> List b;
map f xs = case xs
  | nil => []
  | cons x xs => cons (f x) (map f xs);
[ Of course, the type annotation could be shortened as in case 2 and case 3 without making much difference to the situation. ]
Option 1: We could do it by treating the map definition as in case 1 and then checking that the inferred type matches the annotation, but:
- in general this will fail, because the two types may be "equivalent" yet different (e.g. the order of a and b may be reversed; see the example below).
- it means that we elaborate the definition of `map` without taking much advantage of the type annotation. It would allow polymorphic recursion, but while elaborating map we wouldn't yet know that there are args `a` and `b` in the environment. In the present case this probably doesn't matter much, but for type-class arguments it could be more annoying.
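To make the first point concrete, nothing forces inference to generalize the metavars in the same order as the annotation; it might just as well produce

map : (b : Type) ≡> (a : Type) ≡> (a -> b) -> List a -> List b;

which is equivalent to the annotated type but not syntactically equal to it.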
Option 2: We could instead automatically wrap the definition of map into 2 erasable lambdas based on the type annotation. This is more natural from a "bidirectional type checking" point of view, takes full advantage of the type annotation, and doesn't suffer from having to check, after the fact, that the inferred type matches (with the risk that we didn't infer it "quite right").
*** Case 5: sinking deeper.
Now, what about
map : (a : Type) ≡> (b : Type) ≡> (a -> b) -> List a -> List b;
map = lambda a ≡> lambda b ≡> lambda f -> lambda xs ->
  case xs
  | nil => []
  | cons x xs => cons (f x) (map f xs);
This is currently the code we accept, and I think it'd be good to still accept it, but if we go with option 2 in case 4, what should we do here? Before we even look at the "lambda a ≡> lam..." code, we'd end up adding two lambdas around the whole thing, but these would be extraneous here. We could try to arrange for the "automatic instantiation" (which adds implicit args wherever we think they're needed) to save us, by basically auto-rewriting the above as:
map : (a : Type) ≡> (b : Type) ≡> (a -> b) -> List a -> List b;
map = (lambda a ≡> lambda b ≡> lambda f -> lambda xs ->
        case xs
        | nil => []
        | cons x xs => cons (f x) (map f xs))
      (a := a) (b := b);
but it feels brittle, and seems wrong: why uselessly add lambdas and then matching applications, when the code already came "written in full"?
[ Side note: notice how in the definition above, we use `a` and `b` which look like free variables but are really bound by the auto-added lambdas. ]
One could think: only auto-add the lambdas if the body doesn't already start with such lambdas. But the lambdas need to be added before we elaborate the definition (because they influence the ctx in which the elaboration takes place), so we can't really look at the definition to make that decision: for all we know, the definition has a macro-call at its top level which needs to look at the ctx to decide what to do.
I think maybe a good solution is to try and provide two forms of `=` definitions: a basic one that doesn't do any magic, and another one that auto-adds erasable lambdas based on provided type annotations. Hopefully the fancier one can be defined as a declaration-macro.
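For instance (the name `define-poly` and its syntax here are purely hypothetical), the fancier form could be a declaration-macro which looks up the type annotation and expands

define-poly map = <exp>;

into the basic `=` with the erasable lambdas added up front:

map = lambda a ≡> lambda b ≡> <exp>;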
Still, this "auto-adding lambdas" is pretty intrusive. It means that definitions with a type annotation are handled very differently from those without, and that the type annotation already provides part of the definition.
Another problem is that this only works for "prefix args". If you consider the definition of K:
K : (a : Type) ≡> a -> (b : Type) ≡> b -> a;
K = ...
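Written in full, this definition would presumably be

K = lambda a ≡> lambda x -> lambda b ≡> lambda y -> x;

where the `lambda b ≡>` sits in the middle of the expression rather than around it.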
Only the `a` argument can be easily auto-added as part of the declaration. To auto-add the `b` argument, we'd need to auto-add lambdas not just around definitions but around arbitrary expressions: this wouldn't be just let-bound generalization any more; it would require some generic "add implicit lambdas wherever needed" mechanism. We know from MLF and other type-inference systems that such things can be done in some cases, but:
- it's tricky.
- it's usually based on comparing the expression's inferred type with what the context expects, which means the lambda can only be added after the fact (i.e. this corresponds again to option 1 rather than option 2).
This becomes even more problematic for implicit args (rather than merely erasable args): erasable args typically correspond to System-F style polymorphism, can usually be fully inferred, and to a large extent don't matter; but implicit args are real arguments (think type-class dictionaries), so they have an impact on performance. This means we don't want to risk adding spurious "lambdas + corresponding calls", since these may not always turn out to be easily optimizable via eta-reduction.
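Concretely (sketching with `=>` for the implicit arrow, by analogy with the `(a := a)` applications above), wrapping an expression e of implicit-function type would produce something like

lambda d => e (d := d)

i.e. an eta-expansion of e around a dictionary argument d, which we'd then have to count on the optimizer to eta-reduce away.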
Maybe the best option is to only do the auto-add-lambdas for the case where the definition is of the form "f <args> = <exp>", and in that case we can even auto-add intermediate implicit/erasable args, so we could accept:
K : (a : Type) ≡> a -> (b : Type) ≡> b -> a;
K x y = x;
Hmm...
Stefan