Homotopy Type Theory and Higher Inductive Types (science4all.org)
119 points by tw1010 on Jan 7, 2018 | hide | past | favorite | 31 comments


This article is quite hard to read and follow. I have some background in math but can't follow the logic in the article, because the author is constantly jumping from one topic to another. The artificial "questions" from an imaginary layman are not helping; instead they confuse and dilute attention.


The questions are not the problem. They are a variation on the Socratic method and are super useful - when used correctly - for checking your understanding and for following where the author is going next.

The problem is that the text is full of grammar, spelling, and punctuation errors. There are missing words, confusing run-on sentences, etc. It's an unnatural read because it hasn't had an editor.


I found it quite straightforward. There isn't any "jumping around".


I really wish somebody could explain the Univalence axiom to me, without assuming that I know category theory or algebraic topology. You know, just like you can explain the axioms of set theory pretty easily. I mean, if it is a foundation, there should be an easy way to explain it. And not in metaphors, please, but in a precise way.


At a very high level, the univalence axiom says that set operations may not look at the elements of a set. Consider the cartesian product versus set union. The cartesian product does not look at the elements of the sets, but the union does: if A and B share an element, then this element appears only once in A union B, so the number of elements in the union depends on the actual elements of A and B. In contrast, the number of elements in A x B depends only on the number of elements in A and B. The cartesian product does not care about the internals of the elements themselves.

With infinite sets it's not clear what "number of elements" means. Mathematicians say that the number of elements in A and B is the same if there is a way to pair each element of A with a unique element of B. The condition that set operations don't depend on the elements of the set can then be expressed as follows: if there is a pairing of elements between A and A' and between B and B', then there is a pairing between op(A,B) and op(A',B'). The univalence axiom basically says that all set operations in HoTT are of this type.

To connect with the conventional terminology: we call A and A' equivalent if there exists such a pairing. Equality is the relation that all operations must respect: if A = A' and B = B' then op(A,B) = op(A',B'). The univalence axiom says that equality of sets is equivalence of sets.
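Here is a toy sketch of the point above (all names invented for illustration): the size of a cartesian product depends only on the sizes of its inputs, while the size of a union depends on which elements the inputs share.

```typescript
// Product size is "equivalence-invariant"; union size is not.
const productSize = <T, U>(a: Set<T>, b: Set<U>): number => a.size * b.size;
const unionSize = <T>(a: Set<T>, b: Set<T>): number =>
  new Set([...a, ...b]).size;

const A = new Set([1, 2]);
const B1 = new Set([2, 3]); // shares the element 2 with A
const B2 = new Set([3, 4]); // disjoint from A

// B1 and B2 are equivalent (both have two elements), yet:
// productSize(A, B1) === 4 and productSize(A, B2) === 4,
// while unionSize(A, B1) === 3 but unionSize(A, B2) === 4.
```

So a union-like operation "looks at" elements, which is exactly what univalence rules out for operations on types.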


Given an equivalence between (all the values of) two types, you can assume the types are equal, i.e. substitute one for the other in any value, no matter how complex (including functions on values/types etc.).

The value/function conversion is called a "transport" and it actually depends on which equality you've chosen (called "paths" in HoTT).

E.g. bool <-> bit can map false => 0, true => 1 or false => 1, true => 0.

So `(a: bool, b: bool) => a && b` can be "transported" to `(a: bit, b: bit) => a & b` or to `(a: bit, b: bit) => !(!a & !b)` (which is `a | b`).

Of course, bit tricks aren't that useful, but two types which are defined differently (e.g. from different libraries, or different versions of the same library), yet contain the same information, would be interchangeable given a "path" (in the case of the Univalence Axiom, a proof of equivalence).
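The two transports above can be sketched in TypeScript (a toy illustration only; `Bit`, `path1`, `transport`, etc. are made-up names, and real HoTT transport works at the type level, not by runtime conversion):

```typescript
type Bit = 0 | 1;

// Two different equivalences ("paths") between boolean and Bit:
const path1 = { to: (b: boolean): Bit => (b ? 1 : 0), from: (x: Bit) => x === 1 };
const path2 = { to: (b: boolean): Bit => (b ? 0 : 1), from: (x: Bit) => x === 0 };

const and = (a: boolean, b: boolean): boolean => a && b;

// Transporting a binary function along a path:
// pull the arguments back, apply, push the result forward.
const transport =
  (p: { to: (b: boolean) => Bit; from: (x: Bit) => boolean }) =>
  (a: Bit, b: Bit): Bit =>
    p.to(and(p.from(a), p.from(b)));

const bitAnd = transport(path1); // behaves like a & b
const bitOr = transport(path2);  // behaves like !(!a & !b), i.e. a | b
```

Note that the result genuinely depends on which path you picked, which is why HoTT keeps track of paths rather than a bare "these types are equal" fact.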

The other important addition in HoTT is defining non-trivial "paths" (equalities) between values of a type when you define the type - this is used in the HoTT book to describe integer, rational and real numbers, in a way reminiscent of quotient sets.


I doubt that it can be explained in less than fifteen pages.

https://math.uchicago.edu/~may/REU2015/REUPapers/Macor.pdf


Well, it is a start. Fourteen pages anyone? Once we are down to five pages I'll give it a(nother) shot.


In example 2.2, to distinguish between an element and a function, a better example is a constant such as 1, which can be an element of R or a constant function. In type-theory notation, that's 1:R versus \(x:R).1. Another important point is that you have to specify the arity of the function, so that 1:R, \(x:R).1, \(x:R,y:R).1, and \(x:C,y:C).1 are four different types: the first a constant, the second a constant function of one real variable, the third a constant function of arity two, and the last a function of arity two whose variables have type C, the complex field.
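As a rough analogy, TypeScript annotations can stand in for the four judgments (a sketch; `C` is a made-up record type standing in for the complex numbers):

```typescript
// A hypothetical stand-in for the complex field C:
type C = { re: number; im: number };

const c: number = 1;                              // 1 : R (a constant)
const f = (x: number): number => 1;               // \(x:R).1 (arity one)
const g = (x: number, y: number): number => 1;    // \(x:R,y:R).1 (arity two)
const h = (x: C, y: C): number => 1;              // \(x:C,y:C).1 (complex arguments)
```

All four produce the value 1, but they inhabit four distinct types.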

I'm reading page four; perhaps type theory can distinguish among pure functions, lazy functions, side-effecting functions, and other kinds of functions that are implemented by a procedure, which can be lazy or not. So pi:R, \(x:R).pi, and \(x:Haskell-Action()).pi, where the last is a non-pure lazy program for computing all digits of pi, are different types, that is, they belong to different universes.


You should read it; it's an easy one. At least that's how I feel after reading three pages (usually I stop long before that).


It's not bad, but man, the way the author throws around multivariate (i.e. non-curried) functions all over the place is sooo confusing. I remember reading a λ-calculus book that would do that and it's really not fun to try and parse (see the bottom of pg. 7).

But I will say it's very cool to see how HoTT derives all the basic tools you'd use in propositional calculus (the identity A -> A, the combinator (A -> (B -> C)) -> ((A -> B) -> (A -> C)), and so on).
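Those two inhabitants can be written out directly (a sketch; the second is the familiar S combinator in curried form):

```typescript
// The identity, inhabiting A -> A:
const identity = <A>(a: A): A => a;

// (A -> (B -> C)) -> ((A -> B) -> (A -> C)), i.e. the S combinator:
const s =
  <A, B, C>(f: (a: A) => (b: B) => C) =>
  (g: (a: A) => B) =>
  (a: A): C =>
    f(a)(g(a));
```

Under the propositions-as-types reading, these programs *are* proofs of the corresponding propositional tautologies.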


An isomorphism is a structural correspondence between two objects. In HoTT a type's structure is its points, paths between points, paths between paths... etc. Note that "path" is an abstract idea: it's a construction parameterized on two points that is reflexive, symmetric, and transitive.

Now, the whole universe of types itself has a path structure.

Univalence says that isomorphisms of types are the same as paths between types.

In a way it bridges the gap between structural isomorphism and Leibniz equality.


The bird's eye overview of the univalence axiom is as follows.

In homotopy type theory, we have, in addition to familiar types like the type of natural numbers or the type of lists of strings, the "universe type" U. Its inhabitants are the types themselves.

Just as we write "5 : Int", we can also write "Int : U".

The univalence axiom specifies when two inhabitants of U should be deemed equal. The remaining axioms of type theory don't settle this question.


Can you write "U : _"? If so, what fills in the blank? It seems to parallel "Value :: Type", "Type :: Kind", and "Kind :: Sort" from FP.


Indeed, homotopy type theory employs a stratified approach to this question.

That is, what I've written as "U" would more formally be written "U_0". We then have U_0 : U_1, U_1 : U_2, and so on. All the base types are inhabitants of U_0; that is, we have Int : U_0, String : U_0, and so on.

In functional programming parlance, we'd have U_0 = Type, U_1 = Kind, U_2 = Sort.
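For comparison, Lean 4's universe hierarchy works the same way (a sketch; Lean writes `Type u` for what this comment calls `U_u`):

```lean
#check (Nat : Type)       -- a base type lives in U_0
#check (Type : Type 1)    -- U_0 : U_1
#check (Type 1 : Type 2)  -- U_1 : U_2
```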

There's a thing called "typical ambiguity" which allows us to drop the index in most cases (and let it be automatically inferred).


Thanks. HoTT sounds interesting. I'll have to investigate further.


Computerphile did some videos on Voevodsky and homotopy type theory: https://www.youtube.com/watch?v=v5a5BYZwRx8

(watch those eyes! I like how they bug out when he gets really excited.)


> I mean, when you actually use integers, do you really picture them as equivalence classes of the Cartesian square of natural numbers?

Yes, accountants really do think of integers this way: these are assets and liabilities.
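That picture is easy to make concrete (a toy sketch with invented names): an integer is a pair of naturals (assets, liabilities), and two pairs represent the same integer exactly when their cross-sums agree.

```typescript
type Z = { assets: number; liabilities: number };

// (a, l) ~ (a', l')  iff  a + l' === a' + l   (i.e. a - l === a' - l')
const sameInt = (x: Z, y: Z): boolean =>
  x.assets + y.liabilities === y.assets + x.liabilities;

const books1 = { assets: 5, liabilities: 2 }; // net +3
const books2 = { assets: 3, liabilities: 0 }; // also net +3
```

The equivalence relation is phrased with addition only, so it makes sense before subtraction (and hence negative numbers) has been defined, which is the whole point of the construction.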


https://www.cl.cam.ac.uk/~sd601/thesis.pdf might provide useful background reading for those who aren’t familiar with type theories.


In 2013, a consortium of the greatest mathematicians published a massive volume which reboots our conception of mathematics.

Ok, here's a rant by a frustrated student who's spent way too much time drinking the Kool-Aid. None of the authors, except for Voevodsky, is widely known among professional mathematicians. And Voevodsky is not known for his work on HoTT. In fact, most mathematicians regard homotopy type theory as some obscure stuff logicians and computer scientists do, with no relevance to their research. And they're most certainly right.

"greatest mathematicians". Nobody would call a mathematician "great" for coming up with a definition. (No, Grothendieck is not an example, because he actually put his definitions to use and solved hard problems.) Yet there are no interesting theorems proved via the univalence axiom which weren't known before. Thus, the "greatest" mathematical achievement of the HoTT community seems to have been coming up with the univalence axiom. Everybody can come up with random axioms. Thus, I'd say their greatest accomplishment is not actually mathematical but social, because they managed to market this "breakthrough" the way it has been.

The whole process of how type theory research seems to work is completely absurd. Type theorists first create a syntax and only later come up with a semantics for it. In other words, they first think about how they want to say something before they even know what to say. For example, the semantics of Martin-Löf's dependent type theory was developed only 10-15 years after Martin-Löf came up with his syntax. A few years ago, somebody finally published a proof that the semantics of extensional Martin-Löf type theory is equivalent to locally cartesian closed categories (lcccs). It's not hard to understand what an lccc is and to state its defining operations/axioms. Lcccs pop up everywhere. The concept of dependent types, on the other hand, had not existed in mathematics for thousands of years, seemingly without causing any harm. Nobody has gained any interesting new insights by understanding dependent types, to my knowledge. Why, exactly, is this awkward axiomatization of lcccs interesting?

And the same happens for intensional type theory with the univalence axiom. It's been blurted out for years that this syntax's semantics should be equivalent to infinity lcccs. Yet there is still no proof, and no evidence that HoTT is the easiest axiomatization or the easiest to work with; a proof, if it exists, will likely be extremely convoluted, because it has to deal with all the peculiarities of a syntax that was created without having infinity categories in mind. A language that makes it hard to understand what it means has failed.

If you don't believe me (and there is no reason why you should), please read this thread [1]. Lurie, the expert on infinity categories (well, there are others, but he's written the canonical tome), debates with type theorists about the usefulness of HoTT. I believe it ends with Lurie not seeing any usefulness and calling it a day.

[1] https://mathematicswithoutapologies.wordpress.com/2015/05/13...


> The whole process of how type theory research seems to work is completely absurd. Type theorists first create a syntax and later come up with a semantics for it. In other words, they first think about how they want to say something before they even know what to say.

Well no, they know sort of what they have to say. Type theory is syntax-directed because this is what ensures it can be mechanically automated and doesn't require any intelligence to verify.

> The concept of dependent types, on the other hand, has not existed in mathematics for thousands of years, seemingly without causing any harm.

That's what some thought about Russell's basic theory of types before the inconsistencies of set theory were properly acknowledged. And now type systems are in every major programming language, driving trillion-dollar industries.

> Nobody gained any interesting new insights by understanding dependent types, to my knowledge.

What kind of insights? There are plenty of great industry and computer science insights, even if there have been no or few abstract mathematical insights yet. Also, there's a growing push to use automated theorem proving tools like Coq in mathematics because of the growing complexity of this field.

Finally, your argument that a difficult proof hasn't been found in the handful of years since HoTT began, therefore HoTT is probably invalid or useless, is frankly asinine.


Can't reply to everything here. I don't know to what extent Russell's type system influenced modern-day type systems in programming languages.

> Well no, they know sort of what they have to say.

Yes, I agree. But I'd expect more from people who literally want to make formal, exact proofs viable.

> Type theory is syntax-directed because this is what ensures it's mechanically automated and doesn't require any intelligence to verify.

Validity of terms in extensional dependent type theory is not decidable, but that doesn't stop people from studying this system. They could just as well study the free locally cartesian closed category, but I guess it would be too obviously trivial.

> Also, there's a growing push to use automated theorem proving tools like Coq in mathematics because of the growing complexity of this field.

Err... There are very few mathematicians who want to spend significant effort on this. And one of my points is that there is no reason to believe that a system based on intensional dependent type theory (e.g. Coq) is the most suitable one, except that it already exists.

> Finally, your argument that a difficult proof hasn't been found in the handful of years since HoTT began, therefore HoTT is probably invalid or useless, is frankly asinine.

Why thank you. HoTT has existed for 10 years now, Martin-Löf type theory for 40, and experts in the fields about to be revolutionized are very reserved. If you think that not expecting a revolution here is asinine, I don't think we can come to an agreement.


> There are very few mathematicians who want to spend significant effort on this.

Because the tools aren't quite there yet.

> And one of my points is that there is no reason to believe that a system based on intensional dependent type theory (e.g. Coq) is the most suitable one, except that it already exists.

No one has claimed otherwise.

> If you think that not expecting a revolution here is asinine

That's not what I said. You effectively claimed that the fact that a difficult proof had not yet been found, despite many believing the equivalence, somehow entails that it won't be found. That's the asinine argument. I'm sure you're very well aware of proofs that evaded mathematicians for millennia.


You raise a couple of important points. I agree that "a consortium of the greatest mathematicians" is a bit of hyperbole, even though many of the contributors to the HoTT book are extremely well known in their areas of research, each listing numerous worthwhile contributions. Let me react to some of your statements more precisely.

"In fact, most mathematicians regard homotopy type theory as some obscure stuff logicians and computer scientists do, with no relevance to their research." For the most part, any two mathematicians working on different subjects won't believe that each other's work is relevant to them.

"Everybody can come up with random axioms." This is of course true, on face value, but the point of the univalence axiom is that it is firstly an extremely interesting axiom, secondly that it's an eminently practical axiom (vastly simplifying formal proofs in some areas), and thirdly that it's actually fulfilled in several models.

"The concept of dependent types, on the other hand, has not existed in mathematics for thousands of years, seemingly without causing any harm." While dependent types were only made explicit in Martin-Löf's type theory in the 1970s, dependent types pop up everywhere. For instance the type of vectors of length n is a dependent type (dependent on the value n).

"Why, exactly, is this awkward axiomatization of lcccs interesting?" It's because it's so simple to work syntactically. Just as informal proofs (written in English or a different human language) can be (in principle) be translated to formal proofs in ZFC or a different set theory, with a small bit of training you can recognize when an informal proof can be translated to extensional Martin-Löf type theory. In this way a single piece of informal proof gives rise to a multitude of theorems, one for each lccc.

"and a proof, if it exists, will likely be extremely convoluted" Partly, this is exactly the point! The language of homotopy type theory makes it easy to work with infinity categories. By now the correspondence has been checked in many cases. In particular, the correspondence has always been known for the model of simplicial sets (which was, in fact, Voevodsky's starting point). Thus from the very beginning, we were able to use HoTT as a convenient synthetic language for doing algebraic topology.

The thread you linked to is full of interesting food for thought. Be sure to also check out Mike Shulman's comments there! I'm happy to discuss this further. :-)


Hi Ingo, I'm pleasantly surprised to see an actual researcher here.

> For the most part, any two mathematicians working on different subjects won't believe that each other's work is relevant to them.

This is somewhat true, but I think research in type theory is connected to that in other branches of mathematics (if you want to call type theory maths) very weakly in comparison. There are some connections to category theory, but they are far too weak, given that type theory should essentially be a branch of categorical logic -- the one that deals with the initial so-and-so category and its computational properties.

> The point of the univalence axiom is that it is firstly an extremely interesting axiom

I agree that the underlying ideas are interesting. But I think it's much clearer to see what's going on if you know what an object classifier is. Can't say much about the rest. I mean, yeah, it's of course true that the univalence axiom is not just a randomly generated formula, and people have thought hard about it.

> For instance the type of vectors of length n is a dependent type (dependent on the value n).

What do you gain in comparison to "a vector of length n is a vector such that its length is equal to n"?

"Why, exactly, is this awkward axiomatization of lcccs interesting?" It's because it's so simple to work syntactically. Just as informal proofs (written in English or a different human language) can be (in principle) be translated to formal proofs in ZFC or a different set theory, with a small bit of training you can recognize when an informal proof can be translated to extensional Martin-Löf type theory. In this way a single piece of informal proof gives rise to a multitude of theorems, one for each lccc.

I agree with your last sentence strongly, but I don't think you need type theory to do that. Why can't you translate your informal proof into any other, possibly more convenient or intuitive, syntax instead? Wouldn't it be incredibly random if intensional type theory, which was not defined as a syntax for infinity lcccs at all, were the best way to do this? At one point in the discussion I've linked to, Shulman explains the type-theoretic proof of "the fundamental group of S^1 is Z", as presented in the HoTT book I believe. Lurie is of course not impressed, because the proof is also not hard to do for infinity toposes and basically the same if you know what an object classifier does and what the circle is. So to somebody like Lurie, formalizing things in type theory means translating his intuition to type theory and feeding it to a computer. If he read a theorem formalized in type theory, he'd have to translate it back to infinity category theory again.

"and a proof, if it exists, will likely be extremely convoluted" Partly, this is exactly the point! The language of homotopy type theory makes it easy to work with infinity categories. By now the correspondence has been checked in many cases. In particular, the correspondence has always been known for the model of simplicial sets (which was, in fact, Voevodsky's starting point). Thus from the very beginning, we were able to use HoTT as a convenient synthetic language for doing algebraic topology.

I agree that there may be value in finding more "rigid", equivalent ways to define infinity categories. For example, simplicially enriched categories instead of simplicial sets. But that intensional type theory is the right trade-off to be made (without making type checking impossible) would be extremely surprising to me.


I don't know anything about homotopy types, but I think that they are trying to build a system in which automated proofs can be computed more easily. So the appeal is in applying computer methods to proofs. It also uses the language of homotopy to suggest that some ideas from category theory and homotopy theory can be translated to see if they give a new perspective on old problems. I don't know if the road to computer proofs via homotopy types is going to bear fruit or not, but I think it should be tested in that field: creating computer proofs for theorems. https://math.stackexchange.com/questions/1158383/examples-of...


Thanks for writing this so clearly. It also captures my personal view on the matter. I have not been involved in such research, but friends of mine have. So I have been looking at it on the side, concluding what you wrote.


If you have genuine curiosity about HoTT why not post to the mailing list?

https://groups.google.com/forum/#!forum/homotopytypetheory

You will get answers from real experts with updated views.


Thanks, but I think I've recovered from my curiosity.


Great rant. Spent a day or so reading [1] about two months ago and called it a day too :)


Introductory talk: "A Functional Programmer's Guide to Homotopy Type Theory" https://www.youtube.com/watch?v=caSOTjr1z18



