For the HN crowd who are generally programmers but not necessarily mathematicians, it’s more relevant to consider the programming side of things. There is a very good book (one I haven’t finished unfortunately) that covers Lean from a functional programming perspective rather than proving mathematics perspective: https://leanprover.github.io/functional_programming_in_lean/
I am learning Lean myself so forgive me as I have an overly rosy picture of it as a beginner. My current goal is to write and prove the kind of code normal programmers would write, such as real-world compression/decompression algorithms as in the recent lean-zip example: https://github.com/kiranandcode/lean-zip/blob/master/Zip/Nat...
I recall an experiment in proving software correct from the 1990s that found more errors in the final proof annotations than in the software it had proved correct.
Then, I foresee 2 other obstacles, 1 of which may disappear:
1. Actually knowing what the software is supposed to do is hard. Understanding what the users actually want to do and what the customers are paying to do (which aren't necessarily the same thing) is complex
2. The quality of the work of many software developers is abysmal and I don't know why they would be better at a truth language than they are at Java or some other language.
Objection 2 may disappear, replaced by AI systems with the attention to do what needs to be done. So perhaps things will change enough to make it worthwhile...
I think the hope for 2 is that those programmers would be forced into inaction by the language safety, rather than being allowed to cause problems.
I don't really think that works either, because there's endless ways to add complication even if you can't worsen behavior (assuming that's even possible). At best they might be caught eventually... but anyone who has worked in a large tech company knows at least a few people who are somehow still employed years later, despite constant ineptitude. Play The Game well enough and it's probably always possible.
Re 1: Discussing and guiding the desirable theorems for general-purpose programs has been a major challenge for us. Proofs for their own sake (bad?) vs glorious general results (good but hard?). Actual human guidance can be critical there, at least for now.
Hmm like the “new” JS Fetch api with `then` chaining? What about map, filter, reduce? Anonymous functions? List comprehensions? FP is everywhere. Pure FP code isn’t seen very often, as side effects are necessary for most classes of programs, but neither is pure OOP code, as not everything is dynamically dispatched, nor imperative code, as Objects or functions may more cleanly describe/convey something in code.
I shoved math aside because I think for most of the HN crowd it wouldn’t be a good use of their time to do what mainstream mathematics is about, like the “things such as Grothendieck schemes and perfectoid spaces” the article also references. FP is much more relevant because for any program for which a proof of correctness is worthwhile, you can always extract a functional core of that program (functional core, imperative shell). And that functional core will be easier to prove than if it were written in an imperative style.
Everything in the math universal language is defined as an expression or formula.
All proofs are based on this concept.
To translate this into programming, think about what programming is. Programming, rather than being a single-line formula, is a series of procedures:
1. add 1
2. add 3
3. repeat.
In functional programming you get rid of that, and you think from the perspective of: how much of a program can you fit into a single one-liner? An expression? Think map, reduce, list comprehensions, etc.
That is essentially what functional programming is. Fitting your entire program onto one line OR fitting it into a math expression.
The reason why you see multiple lines in FP languages is because of aliasing.
m = b + c
y = x + m
is really:
y = x + (b + c)
This is also isomorphic to the concept of immutability. By making things immutable you're just aliasing part of the one-liner...
So functional programming, one line programs, formulas and equations in math, and immutability are essentially ALL the same concept.
That is why lean is functional. Because it's math.
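The aliasing point above can be sketched directly in Lean (the definitions and names here are my own illustration, not from any library): a `let` binding is just a name for a sub-expression, so the multi-line form and the one-liner are definitionally the same program.

```lean
-- Multi-line form: `m` is an alias for the sub-expression `b + c`...
def y (x b c : Nat) : Nat :=
  let m := b + c
  x + m

-- ...and it is the same program as the one-liner:
def y' (x b c : Nat) : Nat := x + (b + c)

-- Lean accepts this by pure definitional unfolding (`rfl`):
example (x b c : Nat) : y x b c = y' x b c := rfl
```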
People tell me Lean is really good for functional programming. However, coming from Agda, it feels like a pretty clunky downgrade. They also tell me it's good for tactics, but I've found Coq's tactics more powerful and ergonomic. Maybe these are all baby-duck perceptions. So far, it feels like Lean's main strength isn't being the best at anything, but being decent at everything and having a huge community. I see the point and appeal, but it saddens me that a bit of the beauty and power is lost in exchange.
My perspective is that network effects are far less long-lasting than they feel in the moment. For example if being decent at everything and having a huge community was the only thing that mattered, Perl would still be a big deal. Many similar examples exist.
In the case of Lean, being the first with a huge library really makes a difference. Just as Perl got a big boost from having CPAN. (Which was an imitation of CTAN, except for a programming language instead of TeX.)
When your library is small, this looks like an insurmountable barrier. But you don't have to match the scale for factors of usability to become more important. And porting mathematical libraries is a good target for LLMs. The source is verified, the target is verifiable, and the reasoning path generally ports.
The flip side of this is that, thanks to LLMs, working on a minority platform isn't the barrier that you might expect. Because if their library can be ported to your platform, then your proof can probably be ported to their platform as well!
I think it's pretty clear that being too early has been as bad as being too late for most technologies. There are a few that have gradually gained a community after decades, but it is easier to make a poor copy of one of them and have better momentum and less skepticism.
Isabelle/HOL as a language is nice, but the tooling has deep flaws even outside the pure desktop-first app approach.
The language is different (not necessarily better) in comparison to Lean, but I do agree with some of the points on dependent types. It seems both languages mostly just made different tradeoffs, which imo, were fair and have shaped them into quite efficient tools for their domains. The domain of "proofs" is large and different paradigms just have different strengths/weaknesses, Lean just specialized for a different part of this space.
Sledgehammer is nice but probably just a question of time until an equivalent can be ported/created for Lean. It might also be nice to use for explorative phases but is a resource hog, it also makes proofs concise but I would usually rather see the full chain of steps directly for a proof in the published code instead of a semi-magic "by sledgehammer".
Working on Isabelle itself, however, is painful (especially communicating with developers) in comparison to Lean. Things like "we don't have bugs, just unexpected behaviour" on the mailing list just seem childish/unprofessional. The callout to RAM consumption of Lean and related systems is also a bit of a joke when looking at Isabelle's gluttony for RAM.
> but I would usually rather see the full chain of steps directly for a proof in the published code instead of a semi-magic "by sledgehammer".
One issue with this is that coming up with a quickly-checkable certificate for UNSAT is not exactly a trivial problem. It's effectively the same as writing a formal proof.
> Sledgehammer is nice but probably just a question of time until an equivalent can be ported/created for Lean.
I have no knowledge of what sledgehammer is. However...
> it also makes proofs concise but I would usually rather see the full chain of steps directly for a proof in the published code instead of a semi-magic "by sledgehammer"
The most important thing is keeping up the momentum to formalize more proofs and continue to strengthen the libraries and foundational work.
If that momentum is strongest with Lean so be it. At the same time things become more machine verifiable, converting to a new system will also become easier. It can already be strongly assisted using a general agent like Claude Code.
I think what's interesting about Lean is that Lean is a language, and what most people are talking about is a library called Mathlib. From what I've read about Mathlib, the creators are very pragmatic, which is why they encode classical logic in Lean types, with only a bit of intuitionistic logic[1].
[1] for those unfamiliar with math lingo, classical logic has a lot of powerful features. One of those is the law of the excluded middle, which says something can't be true and false at the same time. That means not not true is true, which you can't say in intuitionistic logic. Another feature is proof by contradiction, where you can prove something by showing that the alternative is unsound. There's quite a few results that depend on these techniques, so trying to do everything in intuitionistic logic has run into a lot of roadblocks.
> I think what's interesting about Lean is that Lean is a language, and what most people are talking about is a library called Mathlib. From what I've read about Mathlib, the creators are very pragmatic, which is why they encode classical logic in Lean types, with only a bit of intuitionistic logic
The computer science folks are now working on their own CSLib. https://www.cslib.io https://www.github.com/leanprover/cslib Given that intuitionistic logic is really only relevant to computational content (the whole point of it is to be able to turn a mathematical argument into a construction that could in some sense be computed with), it will be interesting to see how they deal with the issue. Note that if you write algorithms in Lean, you are already limited to some kind of construction, and perhaps that's all the logic you need for that purpose.
Intuitionistic logic is a refinement of classical logic, not a limitation: for every proposition you can prove in classical logic there is at least one equivalent proposition in intuitionistic logic. But when your use of LEM is tracked by the logic (in intuitionistic logic a proof by LEM can only prove ¬¬A, not A, which are not equivalent) it's a constant temptation to try to produce a constructive proof that lets you erase the sin marker.
In compsci that's actually sometimes relevant, because the programs you can extract from a ¬¬A are not the same programs you can extract from an A.
Classical logic has plenty of limitations/roadblocks, all logics do. Logic isn't a unified domain, but an infinite beach of structural shards, each providing a unique lens of study.
Classical logic was rejected in computer science because the non-constructive nature made it inappropriate for an ostensibly constructive domain. Theoretical mathematics has plenty of uses to prove existences and then do nothing with the relevant object. A computer, generally, is more interested in performing operations over objects, which requires more than proving the object exists.
Additionally, while you can implement evaluation of classical logic with a machine, it's extremely unwieldy and inefficient, and allows for a level of non-rigor that proves to be a massive footgun.
Classical logic isn’t rejected in computer science. Computer science papers don’t generally care if their proofs are non-constructive, just like in mathematics.
This entire thread is making clear that constructivists want to speak on behalf of everyone, while in the real world the vast majority of mathematicians or logicians don't belong to their niche school of mathematics/philosophy.
Intuitionism is just disallowing the law of the excluded middle (that propositions are either true or they are not true). Disallowing non-constructive proofs is a related system to intuitionism called “constructivism”. There are rigorous formulations of mathematics that are constructive, intuitionist or even strict finitist.
But proving the object exists is still useful, of course: it effectively means you can assume an oracle that constructs this object without hitting any contradiction. Talking about oracles is useful in turn since it's a very general way of talking about side-conditions that might make something easier to construct.
Of course. Though it's also important to note: whether or not an object exists is dependent on the logic being utilized itself, which is to say nothing of how even if the object holds some structural equivalent in the given logic of attention, it might not have all provable structure shared between the two, and that's before we get into how the chosen axioms on top of the logical system also mutate all of this.
It's not that classical logic is useless, it's just that it's not particularly appropriate to choose as the basis for a system built on algorithms. This goes both ways. Set theory was taken as the foundation of arithmetic, et al. because type theory was simply too unwieldy for human beings scrawling algebras on blackboards.
I am absolutely not even close to being an expert on the topic, but type theory wasn't all that well understood even relatively recently - Voevodsky coined the Univalence axiom in 2009 or so, while sets have been used for centuries.
So not sure it would be "unwieldy", it's a remarkably simple addition and it may avoid some of the pain points with sets? But again, not even a mathematician.
Set theory was chosen because it was a compatively simple proof of concept. You don't really refer to the foundation when scrawling algebra on a blackboard the way you would with a proof assistant, and this actually causes all sorts of issues down the line (it's a key motivation for things like HoTT).
You're walking down a corridor. After hours and hours you ask "is it possible to figure out how far it is to the nearest exit?". Your classical logic friend answers: "Yes. Either there is no exit, in which case the answer is infinity. Or there is an exit, in which case we just have to keep walking until we find it. QED"
This kind of wElL AcTUaLly argument is not allowed in constructive logic.
As far as I understand it, classical mathematics is non-constructive. This means there are quite a few proofs that show that some value exists, but not what that value is. And in mathematics, a proof often depends on the existence of some value (you can't do an operation on nothing).
The thing is it can be quite useful to always know what a value is, and there's some cool things you can do when you know how to get a value (such as create an algorithm to get said value). I'm still learning this stuff myself, but intuitionistic logic gets you a lot of interesting properties.
> As far as I understand it, classical mathematics is non-constructive.
It's not as simple as that. Classical mathematics can talk about whether some property is computationally decidable (possibly with further tweaks, e.g. modulo some oracle, or with complexity constraints) or whether some object is computable (see above), express decision/construction procedures etc.; it's just incredibly clunky to do so, and it may be worthwhile to introduce foundations that make it natural to talk about these things.
Would it be fair to say then that classical mathematics does not require computability, so it requires a lot more bookkeeping, while intuitionistic logic requires constructivism, so it's the air you live and breathe in, which is much more natural?
Intuitionistic logic is not really constrained to talking about constructive things: you just stuff everything else in the negative fragment. Does that ultimately make sense? Maybe, maybe not. Perhaps that goes too far in obscuring the inherent duality of classical logic, which is still very useful.
It’s not intuitive, it’s intuitionist. I’m not saying that to nitpick it’s just important to make the distinction in this case because it really isn’t intuitive at all in the usual sense.
Why you would use it is it’s an alternative axiomatic framework so you get different results. The analogy is in geometry if you exclude the parallel postulate but use all of the other axioms from Euclid you get hyperbolic geometry. It’s a different geometry and is a worthy subject of study. One isn’t right and the other wrong, although people get very het up about intuitionism and other alternative axiomatic frameworks in mathematics like constructivism and finitism.
Thank you for the correction I actually didn't realise that so have learned something.
Specifically for people who are interested it seems you have to replace the parallel postulate with a postulate that says every point is a saddle point (which is like the centre point of a pringle if you know what that looks like).
In constructive logic, a proof of "A or B" consists of a pair (T,P). If T equals 0, then P proves A. If T equals 1, then P proves B. This directly corresponds to tagged union data types in programming. A "Float or Int" consists of a pair (Tag, Union). If Tag equals 0, then Union stores a Float. If Tag equals 1, then Union stores an Int.
In classical logic, a proof of "A or not A" requires nothing, a proof out of thin air.
Obviously, we want to stick with useful data structures, so we use constructive logic for programming.
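In Lean this correspondence is literal: a disjunction is an inductive type with two tagged constructors, exactly the tagged union described above. A minimal sketch (the type names here are my own, mirroring Lean's built-in `Or`):

```lean
-- A proof of "A or B" is a tag plus a payload:
inductive MyOr (A B : Prop) : Prop where
  | inl : A → MyOr A B  -- tag 0 carries a proof of A
  | inr : B → MyOr A B  -- tag 1 carries a proof of B

-- The same shape for data instead of proofs:
inductive FloatOrInt where
  | float : Float → FloatOrInt  -- tag 0 stores a Float
  | int   : Int → FloatOrInt    -- tag 1 stores an Int
```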
The difference only becomes evident when proving liveness/termination (since if your algorithm terminates successfully it has to construct something, and it only has to be proven that it's not incorrect) and then it turns out that these proofs do use something quite aligned to constructive logic.
... and also to classical logic. Liveness proofs typically require finding a variant that converges to some terminal value, and that's just as easy to do in classical logic as in constructive logic.
I've been using formal methods for years now and have yet to see where constructive logic makes things easier (I'm not saying it necessarily makes things harder, either).
How have you used the Curry Howard correspondence to make proving the correctness of non-trivial algorithms easier (than, say, Isabelle/HOL or TLA+ proofs)?
I think stuff like "synthetic topology", "synthetic differential geometry", "synthetic computability theory", "synthetic algebraic geometry" are the most promising applications at the moment.
It can also find commonalities between different abstract areas of maths, since there are a lot of exotic interpretations of intuitionistic logic, and doing mathematics within intuitionistic logic allows one to prove results which are true in all these interpretations simultaneously.
I'm not sure if intuitionism has a "killer app" yet, but you could say the same about every piece of theory ever, at least over its initial period of development. I think the broad lesson is that the rules of logic are a "coordinate system" for doing mathematics, and changing the rules of logic is like changing to a different coordinate system, which might make certain things easier. In some areas of maths, like modern Algebraic Geometry, the standard rules of logic might be why the area is borderline impenetrable.
These are more like computational-ish interpretations of sheaves, topological spaces, synthetic geometry etc. The link of intuitionistic logic to computation is close enough that these things don't really make it "non-computational". One can definitely argue though that many mathematicians are finding out that things like "expressing X in a topos" are effectively roundabout ways of discussing constructive logic and constructivity concerns.
This isn’t quite right. Classical logic doesn’t permit going from “it is impossible to disprove” to “true”. For example, the continuum hypothesis cannot be disproven in ZFC (which is formulated in classical logic (the axiom of choice implies the law of the excluded middle)), but that doesn’t let us conclude that the continuum hypothesis is true.
Rather, in classical logic, if you can show that a statement being false would imply a contradiction, you can conclude that the statement is true.
In intuitionistic logic, you would only conclude that the statement is not false.
And, I’m not sure identifying “true” with “provable” in intuitionistic logic is entirely right either?
In intuitionistic logic, you only have a proof if you have a constructive proof.
But, like, that doesn’t mean that if you don’t have a constructive proof, that the statement is therefore not true?
If a statement is independent of your axioms when using classical logic, it is also independent of your axioms when using intuitionistic logic, as intuitionistic logic has a subset of the allowed inference rules.
If a statement is independent, then there is no proof of it, and there is no proof of its negation. If a proposition being true was the same thing as there being a proof of it, then a proposition that is independent would be not true, and its negation would also be not true.
So, it would be both not true and not false, and these together yield a contradiction.
Intuitionistic logic only lets you conclude that a proposition is true if you have a constructive/intuitionistic proof of it. It doesn’t say that a proposition for which there is no proof, is therefore not true.
As a core example of this, in intuitionistic logic, one doesn’t have the LEM, but, one certainly doesn’t have that the LEM is false. In fact, one has that the LEM isn’t false.
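That last point is a one-liner in Lean, using no classical axioms at all: LEM itself is not provable intuitionistically, but its double negation is.

```lean
-- Intuitionistically provable: LEM is not false.
-- Given a refutation h of (p ∨ ¬p), we can build ¬p from it,
-- inject that back into the disjunction, and contradict h.
theorem not_not_em (p : Prop) : ¬¬(p ∨ ¬p) :=
  fun h => h (Or.inr (fun hp => h (Or.inl hp)))
```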
Ah, so if you had ¬p, and you negated it, you could technically construct ¬¬p in intuitionist logic, but only in classical logic could you reduce that to p? Since truth in classical logic means what you said here, where you didn't actually construct what p is, so you can't reduce it in intuitionistic logic.
> One of those is the law of the excluded middle, which says something can't be true and false at the same time.
That would be the law of non-contradiction (LNC). The law of the excluded middle (LEM) says that for every proposition it is true or its negation is true.
LEM: For all p, p or not p.
LNC: For all p, not (p and not p).
Classical logic satisfies both, intuitionistic logic only satisfies LNC.
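Both statements are short in Lean; LNC falls out as a pure term, while LEM needs the classical axioms (`Classical.em` in Lean's core library):

```lean
-- LNC: provable intuitionistically, no axioms needed.
theorem lnc (p : Prop) : ¬(p ∧ ¬p) :=
  fun ⟨hp, hnp⟩ => hnp hp

-- LEM: requires the Classical namespace.
theorem lem (p : Prop) : p ∨ ¬p := Classical.em p
```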
> Another feature is proof by contradiction, where you can prove something by showing that the alternative is unsound.
As far as lean is concerned, this isn't an example of classical logic. It's just the definition of "not" - to say that some proposition implies a contradiction, and to say that that proposition is untrue, are the same statement.
Most "something"s that you'd want to prove this way will require a step from classical logic, but not all of them. (¬p ⟶ F) ⟶ p is classical; (p ⟶ F) ⟶ ¬p is constructive.
More generally, any negative statements can be proven classically, even in intuitionistic logic. Intuitionistic logic does not have the De Morgan duality found in classical logic, you have to go to linear logic if you want to recover that while keeping constructivity. (The "negative" in linear logic actually models requesting some object, which is dual to constructing it. The connection with the usual meaning of "negative" in logic involves a similar duality between "proposing" a proof and "challenging" it.)
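In Lean the asymmetry is visible directly: the constructive direction is just the definition of `¬` (so the identity function proves it), while the classical direction needs `Classical.byContradiction` from the core library.

```lean
-- Constructive: (p → False) → ¬p holds by definition of ¬.
theorem neg_intro (p : Prop) (h : p → False) : ¬p := h

-- Classical: (¬p → False) → p requires a classical axiom.
theorem dne (p : Prop) (h : ¬¬p) : p := Classical.byContradiction h
```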
So proof by contradiction proves ¬p, but it requires the law of excluded middle to prove p (in the case of ¬p -> F)? I didn't realize that was constructive in the first case.
Well, at some point you have to define what you mean by "proof by contradiction". I was responding to your statement, "prove something by showing that the alternative is unsound". You can prove that something is false that way without needing classical logic.
For what's happening with `¬p -> F`, recall that this is by definition the statement `¬¬p`; classical logic will let you conclude `p` from `¬¬p`, or it will let you apply the law of the excluded middle to conclude that either `p` or `¬p` must be the case, and then show that since it isn't `¬p`, it must be `p`. (Again, not really different approaches, but perhaps different in someone's mental model.)
On the other hand, if you have `p -> F`, that is by definition the statement `¬p`, and if you've established `¬p`, you've already finished proving that p is false.
Something that I find particularly absurd about the hypothetical distinction between intuitionistic and classical logic is that intuitionistic logic is sufficient to prove `¬p` from `¬¬¬p`. (This is quite similar to how 'proof by contradiction' is constructive if you're proving a negative but not if you're proving a positive; it might be the same result.) So for any proposition that can be restated in a "negative" way, the law of the excluded middle remains true in intuitionistic logic. The difference lies only in "fundamentally positive" propositions. (You can do that proof yourself at https://incredible.pm/ ; it's in section 4, `((A→⊥)→⊥)→⊥` -> `A→⊥`.)
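For reference, that triple-negation reduction is a small Lean term with no classical axioms:

```lean
-- ¬¬¬p → ¬p, intuitionistically: assume p, then refute the ¬¬p
-- that h demands by feeding it any ¬p applied to our p.
theorem triple_neg (p : Prop) (h : ¬¬¬p) : ¬p :=
  fun hp => h (fun hnp => hnp hp)
```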
> Martin-Löf designed his type theory with the aim that AC should be provable and in his landmark Constructive mathematics and computer programming presented a detailed derivation of it as his only example. Briefly, if (∀x : A)(∃y :B) C(x,y) then (∃f : A → B)(∀x : A) C(x, f(x)).
> Spoiling the party was Diaconescu’s proof in 1975 that in a certain category-theoretic setting, the axiom of choice implied LEM and therefore classical logic. His proof is reproducible in the setting of intuitionistic set theory and seems to have driven today’s intuitionists to oppose AC.
> It’s striking that AC was seen not merely as acceptable but clear by the likes of Bishop, Bridges and Dummett. Now it is being rejected and the various arguments against it have the look of post-hoc rationalisations. Of course, the alternative would be to reject intuitionism altogether. This is certainly what mathematicians have done: in my experience, the overwhelming majority of constructive mathematicians are not mathematicians at all. They are computer scientists.
Yeah, I suppose I was playing fast and loose with the terminology to make it more approachable. iirc, the definition of proof by contradiction is you assume the negation, show that the negation leads to something that is both true and not true, and hence the negation is logically unsound. Since you can technically derive anything from an unsound system, you derive that the negation is false, and then by the laws of excluded middle and non-contradiction, you know that p must be true.
But now I see from what you mentioned that this means that if you don't do the negation elimination, then you can still show `¬¬p` in an intuitionistic logic system.
Is proof by contradiction of a false statement just a counterexample? Because a counterexample shows that the statement is incoherent, so the negation must be true. And you have to construct a counterexample.
A counterexample as generally understood would be a constructive refutation: it takes ~p as a request to provide p constructively and does just that. Proof by contradiction is much more general than that. Of course the problem of extracting the residual constructive content from a proof by contradiction (explaining how it is in some sense constructing some vastly generalized counterexample) is non-trivial.
Constructivists don't call a proof of ¬p a "proof by contradiction", they just call it a proof of ¬p. To them, a "proof by contradiction" of some p that isn't in the negative fragment is just nonsense, because constructive logic doesn't have the kind of duality that even makes it necessary to talk about contradiction as a kind of proof to begin with. They'd see the classical use of "proof by contradiction" as a clunky way of saying "I've actually only proven a negative statement, and now I can use De Morgan duality to pretend that I proved a positive."
Which logic are you saying “can’t encode the speculative moment”?
I think the two logics can emulate one another? Or, at the very least, can describe what the other concludes. I know intuitionistic logic can have classical logic embedded in it through some sort of “put double negation on everything”. I think if you add some sort of modal operator to classical logic you could probably emulate intuitionistic logic in a similar way?
You don't even need to add a modal operator since modal logic itself can be embedded in classical logic via possible-world semantics. Of course the whole thing becomes a bit clunky - but that's the argument for starting with intuitionistic logic, where you wouldn't need to do that.
Feels like all the write-ups that point out the shortcomings of e.g. Python for scientific computing.
Sure, except that once you have a community at critical mass around a reasonably good tool, that trumps most other things. Momentum builds. People write tutorials, explainers, better documentation, etc. It hits escape velocity.
Feels like Lean, with Terence Tao as a strong proponent / cheerleader, is in that space.
Everyone who argues “but language X is better” … may not be wrong, but they are not making the argument that matters. Is it better than the thing everyone else knows and can use and has more people hours going into it to improve it?
Feels like a “worse is better” situation; or maybe “good and popular is good enough”.
> once you have a community at critical mass around a reasonably good tool, that trumps most other things
This matters a lot less in the age of AI. AI doesn't need a massive number of community-built libraries, it can just write its own. It doesn't need a million tutorials floating out there on the interwebs because unlike most programmers, it will actually read the spec and documentation (tutorials are just projections of the docs/spec anyway). AI doesn't have to avoid languages with no job market because it just needs to do the job at hand, not build a career.
For every "well of course, just...X, that's what everybody does" group-think argument there's a cogent case to be made for at least considering the alternatives. Even if you ultimately reject the alternatives and go with the crowd, you will be better off knowing the landscape.
Every time you go off the beaten path, you're locking yourself into less documentation, more bugs (since there's less exploration of the dark corners), and fewer people you can go to for help. If you've got 20+ choices to make, picking the standard option is the right choice on average, so you can just do it and move on. You have finite attention, so doing a research report on every dependency means you're never actually working on the core problem.
The exceptions to this are when a) it becomes apparent that the standard tool doesn't actually fit your use case, or b) the standard tool significantly overlaps the core problem you're trying to solve.
> You have finite attention, so doing a research report on every dependency means you're never actually working on the core problem.
Reading that took five minutes and gave a good intro to the counter argument to Curry-Howard-all-the-things monomania. If, having invested those five minutes, Lean still seems like the way to go (for whatever reason), fine. You are making a (closer to) informed choice, and likely better off than if you'd spent those five minutes doubling down on the conventional solution.
Most deviations from the group consensus are mistakes, but all progress comes from seeing past the group consensus. So making frequent small bets on peeking around your blinders is a good strategy.
Which shows the lie of the common engineering trope "use the right tool for the job."
It really should be "use the same tool that everyone else is using so you don't have decide which tool is the right one -- the herd made that decision for you!"
"I believe that almost anything that has been formalised today in any system could have been formalised in AUTOMATH. Its main drawbacks were its notation, which really was horrible, and its complete lack of automation. Proofs were long and unreadable."
That's like saying that anything that could be programmed today in your modern language of choice could have been programmed 50 years ago in assembly. Technically yes, economically no.
What about the performance characteristics of the Lean programs? I know it is a natively compiled language, but is the code it produces comparable to that of modern system programming languages in terms of performance?
The author appears to have a serious misconception about Lean, which is surprising since he seems to be quite knowledgeable in the area.
Specifically, the author seems to be under the impression that Lean retains proof objects and the final proof to be checked is one massive proof object, with all definitions unfolded: "these massive terms are unnecessary, but are kept anyway" (TFA). This couldn't be further from the truth. Lean implements exactly the same optimization as the author cherishes in LCF; metaphorically, that "The steps of a proof would be performed but not recorded, like a mathematics lecturer using a small blackboard who rubs out earlier parts of proofs to make space for later ones" (quoted by blog post linked from TFA). Once a `theorem` (as opposed to a `def`) is written in Lean4, then the proof object is no longer used. This is not merely an optimization but a critical part of the language: theorems are opaque. If the proof term is not discarded (and I'm not sure it isn't), then this is only for the sake of user observability in the interactive mode; the kernel does not and cannot care what the proof object was.
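A minimal Lean 4 sketch of that opacity (illustrative names, not from TFA):

```lean
-- A `def` is transparent: later terms may unfold it definitionally.
def two : Nat := 1 + 1

-- A `theorem` is opaque: once the kernel has checked it, only the
-- statement (the type) matters; the proof term is never consulted again.
theorem two_eq_two : two = 2 := rfl

-- Downstream uses depend only on the statement `two = 2`,
-- not on how it was proved.
example : two = 2 := two_eq_two
```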
A proof object in dependent type theory is just the term that inhabits a type. So are you saying the Lean implementation can construct proofs without constructing such a term?
No, I'm saying it is checked and then discarded. (Or at least, discarded by the kernel. Presumably it ends up somewhere in the frontend's tactic cache.) That matches perfectly the metaphor, "rubs out earlier parts of proofs to make space for later ones".
The shared misconception seems to be in believing that because _conceptually_ the theory implemented by Lean builds up a massive proof term, that _operationally_ the Lean kernel must also be doing that. This does not follow. (Even the concept is not quite right since Lean4 is not perfectly referentially transparent in the presence of quotients.)
One of those names that forces a double take when seen disconnected from context:
'Lean or purple drank is a polysubstance drink used as a recreational drug. It is prepared by mixing prescription-grade cough or cold syrup containing an opioid drug '
proving that one of the hardest problems in CS, 'naming things', still keeps on keeping on.
It’s been decades since I could claim to know anything about this field so I’m probably completely wrong in how I read this, but the idea that one might build a theorem prover (“ML!”) for one’s non-ML programming language and have the prover itself accidentally be a really good general purpose programming language … is very funny.
I would’ve expected that people who like computers will converge around something like Idris. It’s marketed as a development tool, not a tool for formalizing mathematics even though it could be used as a proof assistant.
The set theorists decided that mathematics is the overarching superdomain over all study of structure. You don't get to pick and choose. Either mathematics is a suburb of logic and these two things are separate, or they're not and ZFC dogmatics need to accept they don't have a monopoly on math.
I of course fully support reinstating logicism, but the same dogmatics love putting up a fight over that as well.
I think the most surprising thing I've learned taking formal math in college is just how much mathematicians are pragmatists (at least for my teacher with sample size n=1). They're much more interested in new ways to think about ideas, with a side effect of proofs for correctness. The proof is more about explaining why something works, not that it does.
I'm going to take a formal logic class in the fall, and my professor said something akin to "definitely take it if you're interested, just be aware that it probably won't come in use in most of the mathematics done today." The thing is the foundations are mostly laid, and people are interested in using said foundations for interesting things, not for constantly revisiting the foundations.
I think this is one reason most mathematicians don't see a need for formal proof assistants, since from their perspective it's one very small part of math, and not the interesting one.
This is not to say that proof assistants are a dead end—I find them fascinating and hope they continue to grow—but there's a reason that they haven't had a ton of traction.
I think that's a good way of putting it. I would add that most people working in mathematics aren't generalists; their primary interest isn't in a broad picture. Rather, most are hyperfocused on a single domain with a strong backbone of reflexive intuition built up. By virtue of sheer human limitation, there's only so much someone can care about what's happening outside of their world while still making serious contributions within it. This doesn't just apply to shifting foundations: number theorists can hardly be expected to keep up with the forefront of graph theory, for example.
For the pragmatists, Logic as a field commits the mortal sin: it blasphemes against the intuition that mathematicians spend years honing, by obliterating it. Not just for a single domain, but for all domains. Of course, that doesn't really explain the whole picture. Formalism built a holy walled city. Logicians, by the nature of their work, leave the safety of the walled city to survey, exploit and die in the tangled jungle outside. Some don't even speak the holy language of the glorious walled city; they talk in absolute gibberish modalities and hyperstructures. There is a political tension held against logic and logicians as a result.
Not even the most dogmatic of the set theorists ever argued mathematics was possible without reason, however. For mathematics, logic is the world, as the copula makes no distinction between substance and existence. In the same sense that the earth is not matter itself, but it is a material thing.
Putting that aside, to make things clearer: computer science is mathematics. Computer scientists are mathematicians. That was a categorization decided long before you and I ever lived. In the sense that you mean, you're only referring to a very small fraction of what "mathematics" refers to in the true sense of the word. It is just as irreconcilably disjointed as Logic is, not unified and fundamentally non-unifiable.
I too think it would be better if "mathematics" was reserved for the gated suburb of ZFC. But that's not the world we live in, courtesy of the same people who pushed ZFC as a foundation to begin with.
Terence Tao, one of the most important living mathematicians, specifically embraces Lean and has been helping the community embrace it.
What you've done here is an overgeneralization. "People who like math" and "people who like computers" are massive demographics with considerable overlap.
Formalised proofs, and Lean in particular, are still too cumbersome for the "working" mathematician to use day-to-day for research-level math. But clearly there is some interest in where it may take us in future.
He's a Fields Medalist, so he's automatically one of the most important living mathematicians. He is good at explaining things: I leaned on his Analysis textbooks to great effect when I was taking analysis and functional analysis; in a research class I was trying to calculate Fourier transforms of algebraic sets and found various almost throwaway comments on his blog that were extremely enlightening (alas, only to the extent I could follow them). He's a legit great mind of modern mathematics, and also able to communicate well; a historical rarity indeed.
The topic being discussed here has been addressed specifically in his Mastodon posts. The pre-LLM math process was understood to be "state a proof, validate a proof", but the real aim was the unstated third step: "digest the proof so we can extend it to other results". With LLMs/Lean making the first two a lot easier, and possible without the digestion into the mathematical community, we need to add "digesting proofs (from wherever)" to the job description. Thread starts here: https://mathstodon.xyz/@tao/116477351524980995
The link is exactly what I’m saying. I only hear cs people talk about it.
For mathematicians a proof is a means to an end, or a medium of expression - they care about what they say and why.
The correspondence isn’t about C programs corresponding to proofs in math papers. It’s a very a specific claim about kinds of formal systems which don’t resemble how math or programming is done.
I think they're talking about conjectures that are unproven but seem "likely true", where people build further math off the assumption, e.g. the Riemann hypothesis.
For the HN crowd who are generally programmers but not necessarily mathematicians, it’s more relevant to consider the programming side of things. There is a very good book (one I haven’t finished unfortunately) that covers Lean from a functional programming perspective rather than proving mathematics perspective: https://leanprover.github.io/functional_programming_in_lean/
I am learning Lean myself so forgive me as I have an overly rosy picture of it as a beginner. My current goal is to write and prove the kind of code normal programmers would write, such as real-world compression/decompression algorithms as in the recent lean-zip example: https://github.com/kiranandcode/lean-zip/blob/master/Zip/Nat...
While reading through this book, I messed around with a basic computer algebra simplifier in Lean:
https://github.com/dharmatech/symbolism.lean
It's a port of code from C#.
Lean is astonishingly expressive.
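For a sense of what such a port looks like, here's a minimal sketch of the kind of code a computer-algebra simplifier might contain (the type and rewrite rules here are illustrative, not taken from symbolism.lean):

```lean
-- A toy symbolic expression type.
inductive Expr where
  | num : Int → Expr
  | add : Expr → Expr → Expr
  | mul : Expr → Expr → Expr
  deriving Repr

-- A simplifier applying a few algebraic identities
-- (additive identity, multiplicative identity, annihilation).
def simplify : Expr → Expr
  | .add (.num 0) e => simplify e
  | .add e (.num 0) => simplify e
  | .mul (.num 1) e => simplify e
  | .mul e (.num 1) => simplify e
  | .mul (.num 0) _ => .num 0
  | .mul _ (.num 0) => .num 0
  | .add a b => .add (simplify a) (simplify b)
  | .mul a b => .mul (simplify a) (simplify b)
  | e => e

-- #eval simplify (.mul (.num 1) (.add (.num 0) (.num 42)))
```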
I recall an experiment in proving software correct from the 1990s that found more errors in the final proof annotations than in the software it had proved correct.
Then, I foresee 2 other obstacles, 1 of which may disappear:
1. Actually knowing what the software is supposed to do is hard. Understanding what the users actually want to do and what the customers are paying to do (which aren't necessarily the same thing) is complex
2. The quality of the work of many software developers is abysmal and I don't know why they would be better at a truth language than they are at Java or some other language.
Objection 2 may disappear if those developers are replaced with AI systems that have the attention to do what needs to be done. So perhaps things will change enough to make it worthwhile...
I think the hope for 2 is that those programmers would be forced into inaction by the language safety, rather than being allowed to cause problems.
I don't really think that works either, because there's endless ways to add complication even if you can't worsen behavior (assuming that's even possible). At best they might be caught eventually... but anyone who has worked in a large tech company knows at least a few people who are somehow still employed years later, despite constant ineptitude. Play The Game well enough and it's probably always possible.
Re 1: Discussing and guiding the desirable theorems for general-purpose programs has been a major challenge for us. Proofs for their own sake (bad?) vs glorious general results (good but hard?). Actual human guidance can be critical there, at least for now.
Mind linking the experiment? Sounds interesting.
what about non-functional programming?
FP is just as irrelevant for most programmers as the math you already shoved aside
Hmm like the “new” JS Fetch api with `then` chaining? What about map, filter, reduce? Anonymous functions? List comprehensions? FP is everywhere. Pure FP code isn’t seen very often, as side effects are necessary for most classes of programs, but neither is pure OOP code, as not everything is dynamically dispatched, nor imperative code, as Objects or functions may more cleanly describe/convey something in code.
I shoved math aside because I think for most of the HN crowd it wouldn’t be a good use of their time to do what mainstream mathematics is about, like the “things such as Grothendieck schemes and perfectoid spaces” the article also references. FP is much more relevant because for any program for which a proof of correctness is worthwhile, you can always extract a functional core of that program (functional core, imperative shell). And that functional core will be easier to prove than if it were written in an imperative style.
FP and math are the same concept.
The semantics of math are equation based.
Everything in the math universal language is defined as an expression or formula.
All proofs are based on this concept.
To translate this into programming think about what programming is? Programming rather being a single line formula is a series of procedures.
In functional programming you get rid of that, and you think from the perspective of how much of a program you can fit into a single one-liner, an expression. Think map, reduce, list comprehensions, etc. That is essentially what functional programming is: fitting your entire program onto one line OR fitting it into a math expression.
The reason why you see multiple lines in FP languages is because of aliasing.
This is also isomorphic to the concept of immutability. By making things immutable, you're just aliasing part of the one-liner... So functional programming, one-line programs, formulas and equations in math, and immutability are essentially ALL the same concept.
That is why Lean is functional. Because it's math.
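In Lean itself, the contrast this comment describes looks roughly like this (a sketch with made-up function names):

```lean
-- Imperative style: a series of procedures mutating an accumulator.
def sumOfSquaresImp (xs : List Nat) : Nat := Id.run do
  let mut acc := 0
  for x in xs do
    acc := acc + x * x
  return acc

-- Functional style: the whole program is one expression, a formula.
def sumOfSquares (xs : List Nat) : Nat :=
  (xs.map (fun x => x * x)).foldl (· + ·) 0
```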
People tell me Lean is really good for functional programming. However, coming from Agda, it feels like a pretty clunky downgrade. They also tell me it's good for tactics, but I've found Coq's tactics more powerful and ergonomic. Maybe these are all baby-duck perceptions. So far, it feels like Lean's main strength isn't being the best at anything, but being decent at everything and having a huge community. I see the point and appeal, but it saddens me that a bit of the beauty and power is lost in exchange.
In other words, it is a network effect.
My perspective is that network effects are far less long-lasting than they feel in the moment. For example if being decent at everything and having a huge community was the only thing that mattered, Perl would still be a big deal. Many similar examples exist.
In the case of Lean, being the first with a huge library really makes a difference. Just as Perl got a big boost from having CPAN. (Which was an imitation of CTAN, except for a programming language instead of TeX.)
But, based on scaling laws, we should expect the value of a large library for most users to grow around the log of the size of the library. (See https://pdodds.w3.uvm.edu/teaching/courses/2009-08UVM-300/do... for the relevant scaling laws.)
When your library is small, this looks like an insurmountable barrier. But you don't have to match the scale for factors of usability to become more important. And porting mathematical libraries is a good target for LLMs. The source is verified, the target is verifiable, and the reasoning path generally ports.
The flip side of this is that, thanks to LLMs, working on a minority platform isn't the barrier that you might expect. Because if their library can be ported to your platform, then your proof can probably be ported to their platform as well!
Thing is, it comes after both. Maybe it is just being a jack of all trades, but something made it a success while the others remain fairly niche.
I think it's pretty clear that being too early has been as bad as being too late for most technologies. There are a few that have gradually gained a community after decades, but it is easier to make a poor copy of one of them and have better momentum and less skepticism.
When I woke up this morning I could not have predicted someone calling a proof assistant a "Jack of all trades"
fyi it's Rocq now: https://en.wikipedia.org/wiki/Rocq
Isabelle/HOL as a language is nice, but the tooling has deep flaws even outside the pure desktop-first app approach.
The language is different (not necessarily better) in comparison to Lean, but I do agree with some of the points on dependent types. It seems both languages mostly just made different tradeoffs, which imo, were fair and have shaped them into quite efficient tools for their domains. The domain of "proofs" is large and different paradigms just have different strengths/weaknesses, Lean just specialized for a different part of this space.
Sledgehammer is nice, but it's probably just a question of time until an equivalent is ported/created for Lean. It can also be nice to use during exploratory phases, but it is a resource hog; and while it makes proofs concise, I would usually rather see the full chain of steps directly in the published code instead of a semi-magic "by sledgehammer".
Working on Isabelle itself, however, is painful (especially communicating with its developers) in comparison to Lean. Things like "we don't have bugs, just unexpected behaviour" on the mailing list just seem childish/unprofessional. The callout to the RAM consumption of Lean and related systems is also a bit of a joke when looking at Isabelle's own gluttony for RAM.
> but I would usually rather see the full chain of steps directly for a proof in the published code instead of a semi-magic "by sledgehammer".
One issue with this is that coming up with a quickly-checkable certificate for UNSAT is not exactly a trivial problem. It's effectively the same as writing a formal proof.
Last I checked, Isabelle/HOL used a custom Emacs mode as their interface. (I could be mixing it up with one of the other HOLs).
> Sledgehammer is nice but probably just a question of time until an equivalent can be ported/created for Lean.
I have no knowledge of what sledgehammer is. However...
> it also makes proofs concise but I would usually rather see the full chain of steps directly for a proof in the published code instead of a semi-magic "by sledgehammer"
This description makes sledgehammer sound identical to Mathlib's `grind`. https://leanprover-community.github.io/mathlib4_docs/Init/Gr...
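If so, the usage pattern would look something like this (a sketch; I haven't compared their relative power):

```lean
-- Like `by sledgehammer` in Isabelle, `by grind` discharges the goal
-- automatically, leaving no explicit chain of steps in the source:
example (a b : Nat) (h : a = b) : a + 1 = b + 1 := by grind
```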
The most important thing is keeping up the momentum to formalize more proofs and continue to strengthen the libraries and foundational work.
If that momentum is strongest with Lean so be it. At the same time things become more machine verifiable, converting to a new system will also become easier. It can already be strongly assisted using a general agent like Claude Code.
I think what's interesting about Lean is that Lean is a language, and what most people are talking about is a library called Mathlib. From what I've read about Mathlib, the creators are very pragmatic, which is why they encode classical logic in Lean types, with only a bit of intuitionistic logic[1].
[1] for those unfamiliar with math lingo, classical logic has a lot of powerful features. One of those is the law of the excluded middle, which says something can't be true and false at the same time. That means not not true is true, which you can't say in intuitionistic logic. Another feature is proof by contradiction, where you can prove something by showing that the alternative is unsound. There's quite a few results that depend on these techniques, so trying to do everything in intuitionistic logic has run into a lot of roadblocks.
> I think what's interesting about Lean is that Lean is a language, and what most people are talking about is a library called Mathlib. From what I've read about Mathlib, the creators are very pragmatic, which is why they encode classical logic in Lean types, with only a bit of intuitionistic logic
The computer science folks are now working on their own CSLib. https://www.cslib.io https://www.github.com/leanprover/cslib Given that intuitionistic logic is really only relevant to computational content (the whole point of it is to be able to turn a mathematical argument into a construction that could in some sense be computed with), it will be interesting to see how they deal with the issue. Note that if you write algorithms in Lean, you are already limited to some kind of construction, and perhaps that's all the logic you need for that purpose.
Five stages of accepting constructive mathematics:
Denial
Anger
Bargaining
Depression
Acceptance
A talk about constructive mathematics by Andrej Bauer at the Institute for Advanced Study.
https://www.youtube.com/watch?v=21qPOReu4FI
http://dx.doi.org/10.1090/bull/1556
>the law of the excluded middle, which says something can't be true and false at the same time
This is not the excluded middle; it is the law of noncontradiction.
Excluded middle means: either p is true or the negation of p is true
https://en.wikipedia.org/wiki/Law_of_noncontradiction
You mean intuitionistic logic, not "intuitive logic".
Oops, just edited. I'm still fairly new to this area, so I keep mixing up my terms :)
When/why would one prefer to use intuitive logic, given the limitations/roadblocks?
Intuitionistic logic is a refinement of classical logic, not a limitation: for every proposition you can prove in classical logic there is at least one equivalent proposition in intuitionistic logic. But when your use of LEM is tracked by the logic (in intuitionistic logic a proof by LEM can only prove ¬¬A, not A, which are not equivalent) it's a constant temptation to try to produce a constructive proof that lets you erase the sin marker.
In compsci that's actually sometimes relevant, because the programs you can extract from a ¬¬A are not the same programs you can extract from an A.
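Sketched in Lean, using nothing beyond the core library:

```lean
-- Double-negation introduction is constructive:
theorem dni {A : Prop} (a : A) : ¬¬A := fun na => na a

-- Eliminating the double negation requires the classical escape hatch;
-- the use of `Classical.byContradiction` is the tracked "sin marker":
theorem dne {A : Prop} (nna : ¬¬A) : A := Classical.byContradiction nna
```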
Classical logic has plenty of limitations/roadblocks, all logics do. Logic isn't a unified domain, but an infinite beach of structural shards, each providing a unique lens of study.
Classical logic was rejected in computer science because the non-constructive nature made it inappropriate for an ostensibly constructive domain. Theoretical mathematics has plenty of uses to prove existences and then do nothing with the relevant object. A computer, generally, is more interested in performing operations over objects, which requires more than proving the object exists. Additionally, while you can implement evaluation of classical logic with a machine, it's extremely unwieldy and inefficient, and allows for a level of non-rigor that proves to be a massive footgun.
Classical logic isn’t rejected in computer science. Computer science papers don’t generally care if their proofs are non-constructive, just like in mathematics.
This entire thread is making clear that constructivists want to speak on behalf of everyone, while in the real world the vast majority of mathematicians or logicians don't belong to their niche school of mathematics/philosophy.
Do you understand the irony in posting this on a comment chain ostensibly rejecting foundational objectivism?
Intuitionism is just disallowing the law of the excluded middle (that propositions are either true or they are not true). Disallowing non-constructive proofs is a related system to intuitionism called “constructivism”. There are rigorous formulations of mathematics that are constructive, intuitionist or even strict finitist.
What point are you responding to?
The parent of my post referred to disallowing non-constructive proofs, which is not a feature of intuitionist logic but of constructivism.
They care quite a bit actually, they just call their constructive proofs "algorithms" or "decision procedures".
But proving the object exists is still useful, of course: it effectively means you can assume an oracle that constructs this object without hitting any contradiction. Talking about oracles is useful in turn since it's a very general way of talking about side-conditions that might make something easier to construct.
Of course. Though it's also important to note: whether or not an object exists is dependent on the logic being utilized itself, which is to say nothing of how even if the object holds some structural equivalent in the given logic of attention, it might not have all provable structure shared between the two, and that's before we get into how the chosen axioms on top of the logical system also mutate all of this.
It's not that classical logic is useless, it's just that it's not particularly appropriate to choose as the basis for a system built on algorithms. This goes both ways. Set theory was taken as the foundation of arithmetic, et al. because type theory was simply too unwieldy for human beings scrawling algebras on blackboards.
I am absolutely not even close to being an expert on the topic, but type theory wasn't all that well understood even relatively recently - Voevodsky coined the Univalence axiom in 2009 or so, while sets have been used for centuries.
So not sure it would be "unwieldy", it's a remarkably simple addition and it may avoid some of the pain points with sets? But again, not even a mathematician.
Set theory was chosen because it was a compatively simple proof of concept. You don't really refer to the foundation when scrawling algebra on a blackboard the way you would with a proof assistant, and this actually causes all sorts of issues down the line (it's a key motivation for things like HoTT).
You're walking down a corridor. After hours and hours you ask, "is it possible to figure out how far it is to the nearest exit?". Your classical-logic friend answers: "Yes. Either there is no exit, and then the answer is infinity; or there is an exit, and then we just have to keep walking until we find it. QED"
This kind of wElL AcTUaLly argument is not allowed in constructive logic.
As far as I understand it, classical mathematics is non-constructive. This means there are quite a few proofs that show that some value exists, but not what that value is. And in mathematics, a proof often depends on the existence of some value (you can't do an operation on nothing).
The thing is, it can be quite useful to always know what a value is, and there are some cool things you can do when you know how to get a value (such as creating an algorithm to get said value). I'm still learning this stuff myself, but intuitionistic logic gets you a lot of interesting properties.
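A small Lean illustration of the difference (the names are mine):

```lean
-- A constructive existence proof carries its witness: the `4` below is
-- real data, and an algorithm could be extracted from such a proof.
theorem exists_gt_three : ∃ n : Nat, n > 3 := ⟨4, by decide⟩

-- Recovering a witness from a bare (possibly classical) existence proof
-- goes through the axiom of choice, and the result is noncomputable:
noncomputable def someWitness (h : ∃ n : Nat, n > 3) : Nat :=
  Classical.choose h
```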
> As far as I understand it, classical mathematics is non-constructive.
It's not as simple as that. Classical mathematics can talk about whether some property is computationally decidable (possibly with further tweaks, e.g. modulo some oracle, or with complexity constraints) or whether some object is computable (see above), express decision/construction procedures etc.; it's just incredibly clunky to do so, and it may be worthwhile to introduce foundations that make it natural to talk about these things.
Would it be fair to say then that classical mathematics does not require computability, so it requires a lot more bookkeeping, while intuitionistic logic requires constructivism, so it's the air you live and breathe in, which is much more natural?
Intuitionistic logic is not really constrained to talking about constructive things: you just stuff everything else in the negative fragment. Does that ultimately make sense? Maybe, maybe not. Perhaps that goes too far in obscuring the inherent duality of classical logic, which is still very useful.
It’s not intuitive, it’s intuitionist. I’m not saying that to nitpick it’s just important to make the distinction in this case because it really isn’t intuitive at all in the usual sense.
Why you would use it: it's an alternative axiomatic framework, so you get different results. The analogy is in geometry: if you exclude the parallel postulate but use all of the other axioms from Euclid you get hyperbolic geometry. It's a different geometry and is a worthy subject of study. One isn't right and the other wrong, although people get very het up about intuitionism and other alternative axiomatic frameworks in mathematics like constructivism and finitism.
> if you exclude the parallel postulate but use all of the other axioms from Euclid you get hyperbolic geometry
No, you don't.
(You need to replace the parallel postulate with a different one)
Thank you for the correction I actually didn't realise that so have learned something.
Specifically for people who are interested it seems you have to replace the parallel postulate with a postulate that says every point is a saddle point (which is like the centre point of a pringle if you know what that looks like).
https://en.wikipedia.org/wiki/Pringles
I think they called it intuitive, because I called it intuitive in my original post, so that's on me :)
In constructive logic, a proof of "A or B" consists of a pair (T,P). If T equals 0, then P proves A. If T equals 1, then P proves B. This directly corresponds to tagged union data types in programming. A "Float or Int" consists of a pair (Tag, Union). If Tag equals 0, then Union stores a Float. If Tag equals 1, then Union stores an Int.
In classical logic, a proof of "A or not A" requires nothing, a proof out of thin air.
Obviously, we want to stick with useful data structures, so we use constructive logic for programming.
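In Lean the parallel is direct (a sketch):

```lean
-- The data side: a tag (`float` vs `int`) plus a payload.
inductive FloatOrInt where
  | float : Float → FloatOrInt
  | int   : Int → FloatOrInt

-- The proof side: `Or` has exactly the same shape, with `Or.inl` and
-- `Or.inr` as the tag and a proof as the payload.
example : 1 = 1 ∨ 2 = 3 := Or.inl rfl
```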
> Obviously, we want to stick with useful data structures, so we use constructive logic for programming.
I don't know who "we" are, but most proofs of algorithm correctness use classical logic.
Also, there's nothing "obvious" about what you said unless you want proof objects, and why you'd want that is far from obvious in itself.
The difference only becomes evident when proving liveness/termination (since if your algorithm terminates successfully it has to have constructed something, whereas for safety it only has to be proven that it's never incorrect), and then it turns out that these proofs do use something quite aligned with constructive logic.
... and also to classical logic. Liveness proofs typically require finding a variant that converges to some terminal value, and that's just as easy to do in classical logic as in constructive logic.
I've been using formal methods for years now and have yet to see where constructive logic makes things easier (I'm not saying it necessarily makes things harder, either).
You aren’t giving any justification why proofs should necessarily map to data structures.
Not necessarily, I only argue for utility. You can find better justification in the Curry-Howard correspondence.
How have you used the Curry Howard correspondence to make proving the correctness of non-trivial algorithms easier (than, say, Isabelle/HOL or TLA+ proofs)?
There are non-computational interpretations of intuitionistic logic too, like every single thing mentioned on this page: https://ncatlab.org/nlab/show/synthetic+mathematics
I think stuff like "synthetic topology", "synthetic differential geometry", "synthetic computability theory", "synthetic algebraic geometry" are the most promising applications at the moment.
It can also find commonalities between different abstract areas of maths, since there are a lot of exotic interpretations of intuitionistic logic, and doing mathematics within intuitionistic logic allows one to prove results which are true in all these interpretations simultaneously.
I'm not sure if intuitionism has a "killer app" yet, but you could say the same about every piece of theory ever, at least over its initial period of development. I think the broad lesson is that the rules of logic are a "coordinate system" for doing mathematics, and changing the rules of logic is like changing to a different coordinate system, which might make certain things easier. In some areas of maths, like modern Algebraic Geometry, the standard rules of logic might be why the area is borderline impenetrable.
These are more like computational-ish interpretations of sheaves, topological spaces, synthetic geometry etc. The link of intuitionistic logic to computation is close enough that these things don't really make it "non-computational". One can definitely argue though that many mathematicians are finding out that things like "expressing X in a topos" are effectively roundabout ways of discussing constructive logic and constructivity concerns.
Excluded-middle `true` means "[provable] OR [impossible to disprove]".
Intuitionist/Constructivist `true` means, "provable".
The question you are asking determines what answers are acceptable.
Why build an airplane, if you already know it must be possible?
This isn’t quite right. Classical logic doesn’t permit going from “it is impossible to disprove” to “true”. For example, the continuum hypothesis cannot be disproven in ZFC (which is formulated in classical logic; indeed, the axiom of choice implies the law of the excluded middle), but that doesn’t let us conclude that the continuum hypothesis is true.
Rather, in classical logic, if you can show that a statement being false would imply a contradiction, you can conclude that the statement is true.
In intuitionistic logic, you would only conclude that the statement is not false.
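A minimal Lean 4 sketch of that distinction (the names are standard core Lean; `¬¬p` unfolds definitionally to `¬p → False`, so the first proof is just the hypothesis itself):

```lean
-- Refuting ¬p constructively only gets you ¬¬p:
example (p : Prop) (h : ¬p → False) : ¬¬p := h

-- Going all the way to p requires a classical axiom:
example (p : Prop) (h : ¬¬p) : p := Classical.byContradiction h
```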
And, I’m not sure identifying “true” with “provable” in intuitionistic logic is entirely right either?
In intuitionistic logic, you only have a proof if you have a constructive proof.
But, like, that doesn’t mean that if you don’t have a constructive proof, that the statement is therefore not true?
If a statement is independent of your axioms when using classical logic, it is also independent of your axioms when using intuitionistic logic, as intuitionistic logic has a subset of the allowed inference rules.
If a statement is independent, then there is no proof of it, and there is no proof of its negation. If a proposition being true was the same thing as there being a proof of it, then a proposition that is independent would be not true, and its negation would also be not true. So, it would be both not true and not false, and these together yield a contradiction.
Intuitionistic logic only lets you conclude that a proposition is true if you have a constructive/intuitionistic proof of it. It doesn’t say that a proposition for which there is no proof, is therefore not true.
As a core example of this, in intuitionistic logic, one doesn’t have the LEM, but, one certainly doesn’t have that the LEM is false. In fact, one has that the LEM isn’t false.
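That last claim, that LEM isn't false, has a short constructive proof; here is one way it might look in Lean 4 (no classical axioms used):

```lean
-- LEM is not refutable: its double negation holds constructively.
example (p : Prop) : ¬¬(p ∨ ¬p) :=
  fun h => h (Or.inr (fun hp => h (Or.inl hp)))
```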
Ah, so if you assumed ¬p and derived a contradiction, you could construct ¬¬p in intuitionistic logic, but only in classical logic could you reduce that to p? Since truth in classical logic means what you said here, where you didn't actually construct what p is, you can't reduce it in intuitionistic logic.
> Excluded-middle `true` means "[provable] OR [impossible to disprove]".
> Intuitionist/Constructivist `true` means, "provable".
This is completely wrong. Excluded-middle `true` means "provable" and only "provable". "Impossible to disprove" is `independent`, not `true`.
For ideological reasons.
> One of those is the law of the excluded middle, which says something can't be true and false at the same time.
That would be the law of non-contradiction (LNC). The law of the excluded middle (LEM) says that for every proposition it is true or its negation is true.
LEM: For all p, p or not p.
LNC: For all p, not (p and not p).
Classical logic satisfies both, intuitionistic logic only satisfies LNC.
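In Lean 4 terms this split is visible directly; roughly (LNC needs no axioms, while LEM comes from the `Classical` namespace):

```lean
-- LNC holds constructively, with no classical axiom:
example (p : Prop) : ¬(p ∧ ¬p) := fun ⟨hp, hnp⟩ => hnp hp

-- LEM is supplied by the classical axioms (Classical.em):
example (p : Prop) : p ∨ ¬p := Classical.em p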
> Another feature is proof by contradiction, where you can prove something by showing that the alternative is unsound.
As far as Lean is concerned, this isn't an example of classical logic. It's just the definition of "not": to say that some proposition implies a contradiction, and to say that that proposition is untrue, are the same statement.
Most "something"s that you'd want to prove this way will require a step from classical logic, but not all of them. (¬p ⟶ F) ⟶ p is classical; (p ⟶ F) ⟶ ¬p is constructive.
More generally, any negative statements can be proven classically, even in intuitionistic logic. Intuitionistic logic does not have the De Morgan duality found in classical logic, you have to go to linear logic if you want to recover that while keeping constructivity. (The "negative" in linear logic actually models requesting some object, which is dual to constructing it. The connection with the usual meaning of "negative" in logic involves a similar duality between "proposing" a proof and "challenging" it.)
So proof by contradiction proves ¬p, but it requires the law of excluded middle to prove p (in the case of ¬p -> F)? I didn't realize that was constructive in the first case.
Well, at some point you have to define what you mean by "proof by contradiction". I was responding to your statement, "prove something by showing that the alternative is unsound". You can prove that something is false that way without needing classical logic.
Mathlib defines `by_contradiction` as a theorem proving `(¬p → False) → p` for any proposition p. ( https://leanprover-community.github.io/mathlib4_docs/Mathlib... ) This does require classical logic.
For what's happening with `¬p -> F`, recall that this is by definition the statement `¬¬p`; classical logic will let you conclude `p` from `¬¬p`, or it will let you apply the law of the excluded middle to conclude that either `p` or `¬p` must be the case, and then show that since it isn't `¬p`, it must be `p`. (Again, not really different approaches, but perhaps different in someone's mental model.)
On the other hand, if you have `p -> F`, that is by definition the statement `¬p`, and if you've established `¬p`, you've already finished proving that p is false.
Something that I find particularly absurd about the hypothetical distinction between intuitionistic and classical logic is that intuitionistic logic is sufficient to prove `¬p` from `¬¬¬p`. (This is quite similar to how 'proof by contradiction' is constructive if you're proving a negative but not if you're proving a positive; it might be the same result.) So for any proposition that can be restated in a "negative" way, the law of the excluded middle remains true in intuitionistic logic. The difference lies only in "fundamentally positive" propositions. (You can do that proof yourself at https://incredible.pm/ ; it's in section 4, `((A→⊥)→⊥)→⊥` -> `A→⊥`.)
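The triple-negation collapse mentioned above is a one-liner; a sketch in Lean 4 (purely constructive, no `Classical`):

```lean
-- ¬¬¬p → ¬p holds constructively:
example (p : Prop) (h : ¬¬¬p) : ¬p :=
  fun hp => h (fun hnp => hnp hp)
```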
There's a fun article on this very blog telling a similar story: https://lawrencecpaulson.github.io/2021/11/24/Intuitionism.h...
> Martin-Löf designed his type theory with the aim that AC should be provable and in his landmark Constructive mathematics and computer programming presented a detailed derivation of it as his only example. Briefly, if (∀x : A)(∃y :B) C(x,y) then (∃f : A → B)(∀x : A) C(x, f(x)).
> Spoiling the party was Diaconescu’s proof in 1975 that in a certain category-theoretic setting, the axiom of choice implied LEM and therefore classical logic. His proof is reproducible in the setting of intuitionistic set theory and seems to have driven today’s intuitionists to oppose AC.
> It’s striking that AC was seen not merely as acceptable but clear by the likes of Bishop, Bridges and Dummett. Now it is being rejected and the various arguments against it have the look of post-hoc rationalisations. Of course, the alternative would be to reject intuitionism altogether. This is certainly what mathematicians have done: in my experience, the overwhelming majority of constructive mathematicians are not mathematicians at all. They are computer scientists.
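Martin-Löf's derivation goes through because, in his type theory, the existential carries data. A sketch of the same idea in Lean 4, using a `Subtype` in place of a bare `Prop`-valued `∃` (the name `ttAC` is mine; the proof is just reshuffling components, with no choice axiom):

```lean
-- "Type-theoretic AC": if every x comes packaged with a witness y
-- and a proof of C x y, a choice function falls out by projection.
def ttAC {A B : Type} {C : A → B → Prop}
    (h : ∀ x : A, {y : B // C x y}) :
    {f : A → B // ∀ x, C x (f x)} :=
  ⟨fun x => (h x).1, fun x => (h x).2⟩
```

With Lean's `Prop`-valued `Exists`, by contrast, the analogous statement needs `Classical.choice`, which is where Diaconescu's argument bites.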
Yeah, I suppose I was playing fast and loose with the terminology to make it more approachable. IIRC, the definition of proof by contradiction is that you assume the negation, show that the negation implies something that is both true and not true, and hence that the negation is logically unsound. Since you can technically derive anything from an unsound system, you derive that the negation is false, and then by the laws of excluded middle and non-contradiction, you know that p must be true.
But now I see from what you mentioned that this means that if you don't do the negation elimination, then you can still show `¬¬p` in an intuitionistic logic system.
Is proof by contradiction of a false statement just a counterexample? Because a counterexample shows that the statement is incoherent, so the negation must be true. And you have to construct a counterexample.
A counterexample as generally understood would be a constructive refutation: it takes ~p as a request to provide p constructively and does just that. Proof by contradiction is much more general than that. Of course the problem of extracting the residual constructive content from a proof by contradiction (explaining how it is in some sense constructing some vastly generalized counterexample) is non-trivial.
Constructivists don't call a proof of ¬p a "proof by contradiction", they just call it a proof of ¬p. To them, a "proof by contradiction" of some p that isn't in the negative fragment is just nonsense, because constructive logic doesn't have the kind of duality that even makes it necessary to talk about contradiction as a kind of proof to begin with. They'd see the classical use of "proof by contradiction" as a clunky way of saying "I've actually only proven a negative statement, and now I can use De Morgan duality to pretend that I proved a positive."
This makes it good for formal maths, but bad for philosophy, since it means it can’t encode the speculative movement
Which logic are you saying “can’t encode the speculative movement”?
I think the two logics can emulate one another? Or, at the very least, can describe what the other concludes. I know intuitionistic logic can have classical logic embedded in it through some sort of “put double negation on everything”. I think if you add some sort of modal operator to classical logic you could probably emulate intuitionistic logic in a similar way?
You don't even need to add a modal operator since modal logic itself can be embedded in classical logic via possible-world semantics. Of course the whole thing becomes a bit clunky - but that's the argument for starting with intuitionistic logic, where you wouldn't need to do that.
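The "double negation on everything" embedding mentioned above (the Gödel-Gentzen translation) even makes the classically-flavored principles themselves constructive once translated; a small Lean 4 illustration:

```lean
-- Double-negation elimination is classical, but its own
-- double-negation translation is provable constructively:
example (p : Prop) : ¬¬(¬¬p → p) :=
  fun h => h (fun hnnp => (hnnp (fun hp => h (fun _ => hp))).elim)
```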
Any logic with LEM
Feels like all the write-ups that point out the shortcomings of, e.g., Python for scientific computing.
Sure, except that once you have a community at critical mass around a reasonably good tool, that trumps most other things. Momentum builds. People write tutorials, explainers, better documentation, etc. It hits escape velocity.
Feels like Lean, with Terence Tao as a strong proponent/cheerleader, is in that space.
Everyone who argues “but language X is better” … may not be wrong, but they are not making the argument that matters. Is it better than the thing everyone else knows and can use and has more people hours going into it to improve it?
Feels like a “worse is better” situation; or maybe “good and popular is good enough”.
> once you have a community at critical mass around a reasonably good tool, that trumps most other things
This matters a lot less in the age of AI. AI doesn't need a massive number of community-built libraries, it can just write its own. It doesn't need a million tutorials floating out there on the interwebs because unlike most programmers, it will actually read the spec and documentation (tutorials are just projections of the docs/spec anyway). AI doesn't have to avoid languages with no job market because it just needs to do the job at hand, not build a career.
We need more of this.
For every "well of course, just...X, that's what everybody does" group-think argument there's a cogent case to be made for at least considering the alternatives. Even if you ultimately reject the alternatives and go with the crowd, you will be better off knowing the landscape.
It depends!
Every time you go off the beaten path, you're locking yourself into less documentation, more bugs (since there's less exploration of the dark corners), and fewer people you can go to for help. If you've got 20+ choices to make, picking the standard option is the right choice on average, so you can just do it and move on. You have finite attention, so doing a research report on every dependency means you're never actually working on the core problem.
The exceptions to this are when a) it becomes apparent that the standard tool doesn't actually fit your use case, or b) the standard tool significantly overlaps the core problem you're trying to solve.
> You have finite attention, so doing a research report on every dependency means you're never actually working on the core problem.
Reading that took five minutes and gave a good intro to the counterargument to Curry-Howard-all-the-things monomania. If, having invested those five minutes, Lean still seems like the way to go (for whatever reason), fine. You are making a (closer to) informed choice, and are likely better off than if you'd spent those five minutes doubling down on the conventional solution.
Most deviations from the group consensus are mistakes, but all progress comes from seeing past the group consensus. So making frequent small bets on peeking around your blinders is a good strategy.
Which shows the lie of the common engineering trope "use the right tool for the job."
It really should be "use the same tool that everyone else is using so you don't have to decide which tool is the right one -- the herd made that decision for you!"
"I believe that almost anything that has been formalised today in any system could have been formalised in AUTOMATH. Its main drawbacks were its notation, which really was horrible, and its complete lack of automation. Proofs were long and unreadable." That's like saying that anything that could be programmed today in your modern language of choice could have been programmed 50 years ago in assembly. Technically yes, economically no.
Well, assembly languages are generally Turing complete. Not sure what the parallel would be in proof engines.
What about the performance characteristics of the Lean programs? I know it is a natively compiled language, but is the code it produces comparable to that of modern system programming languages in terms of performance?
The author appears to have a serious misconception about Lean, which is surprising since he seems to be quite knowledgeable in the area.
Specifically, the author seems to be under the impression that Lean retains proof objects and the final proof to be checked is one massive proof object, with all definitions unfolded: "these massive terms are unnecessary, but are kept anyway" (TFA). This couldn't be further from the truth. Lean implements exactly the same optimization as the author cherishes in LCF; metaphorically, that "The steps of a proof would be performed but not recorded, like a mathematics lecturer using a small blackboard who rubs out earlier parts of proofs to make space for later ones" (quoted by blog post linked from TFA). Once a `theorem` (as opposed to a `def`) is written in Lean4, then the proof object is no longer used. This is not merely an optimization but a critical part of the language: theorems are opaque. If the proof term is not discarded (and I'm not sure it isn't), then this is only for the sake of user observability in the interactive mode; the kernel does not and cannot care what the proof object was.
A proof object in dependent type theory is just the term that inhabits a type. So are you saying the Lean implementation can construct proofs without constructing such a term?
No, I'm saying it is checked and then discarded. (Or at least, discarded by the kernel. Presumably it ends up somewhere in the frontend's tactic cache.) That matches perfectly the metaphor, "rubs out earlier parts of proofs to make space for later ones".
The shared misconception seems to be in believing that because _conceptually_ the theory implemented by Lean builds up a massive proof term, that _operationally_ the Lean kernel must also be doing that. This does not follow. (Even the concept is not quite right since Lean4 is not perfectly referentially transparent in the presence of quotients.)
One of those names that forces a double take when seen disconnected from context:
'Lean or purple drank is a polysubstance drink used as a recreational drug. It is prepared by mixing prescription-grade cough or cold syrup containing an opioid drug '
proving that one of the hardest problems in CS -- 'naming things' -- still keeps on keeping on.
It’s been decades since I could claim to know anything about this field so I’m probably completely wrong in how I read this, but the idea that one might build a theorem prover (“ML!”) for one’s non-ML programming language and have the prover itself accidentally be a really good general purpose programming language … is very funny.
Slightly off topic: This project https://agentcourt.ai/arb/analysis/index.html uses a Go/Lean hybrid design. The Go code is mostly glue, and the Lean code is the logic https://github.com/agentcourt/adjudication/tree/main/arb/eng.... It's not math-intensive. Really just functional programming with some interesting proofs (including soundness ideally). Go code can migrate to Lean code when that makes sense.
Interesting perspective. Would love to see more discussion.
>The recommended way to install Lean is through VS Code
Is that enough reason?
Good post! +1
Type theory and Lean are more interesting to people who like computers than to people who like math.
I would’ve expected that people who like computers will converge around something like Idris. It’s marketed as a development tool, not a tool for formalizing mathematics even though it could be used as a proof assistant.
The set theorists decided that mathematics is the overarching superdomain over all study of structure. You don't get to pick and choose. Either mathematics is a suburb of logic and these two things are separate, or they're not and ZFC dogmatics need to accept they don't have a monopoly on math.
I of course fully support reinstating logicism, but the same dogmatics love putting up a fight over that as well.
I think the most surprising thing I've learned taking formal math in college is just how much mathematicians are pragmatists (at least for my teacher with sample size n=1). They're much more interested in new ways to think about ideas, with a side effect of proofs for correctness. The proof is more about explaining why something works, not that it does.
I'm going to take a formal logic class in the fall, and my professor said something akin to "definitely take it if you're interested, just be aware that it probably won't come in use in most of the mathematics done today." The thing is the foundations are mostly laid, and people are interested in using said foundations for interesting things, not for constantly revisiting the foundations.
I think this is one reason most mathematicians don't see a need for formal proof assistants, since from their perspective it's one very small part of math, and not the interesting one.
This is not to say that proof assistants are a dead end—I find them fascinating and hope they continue to grow—but there's a reason that they haven't had a ton of traction.
I think that's a good way of putting it. I would add that most people working in mathematics aren't generalists; their primary interest isn't in a broad picture. Rather, most are hyperfocused on a single domain with a strong backbone of reflexive intuition built up. By virtue of sheer human limitation, there's only so much someone can care about what's happening outside of their world while still making serious contributions within it. This doesn't even just extend all the way to shifting foundations: number theorists can hardly be expected to keep up with the forefront of graph theory, for example.
For the pragmatists, Logic as a field commits the mortal sin: it blasphemes the intuition that mathematicians spend years honing by obliterating it. Not just for a single domain, but for all domains. Of course, that doesn't really explain the whole picture. Formalism built a holy walled city. Logicians, by the nature of their work, leave the safety of the walled city to survey, exploit and die in the tangled jungle outside. Some don't even speak the holy language of the glorious walled city; they talk in absolute gibberish modalities and hyperstructures. There is a political tension held against logic and logicians as a result.
Mathematicians use logic to talk about the mathematical world. But logic is not the world.
Not even the most dogmatic of the set theorists ever argued mathematics was possible without reason, however. For mathematics, logic is the world, as the copula makes no distinction between substance and existence. In the same sense that the earth is not matter itself, but it is a material thing.
Putting that aside, to make things more clear: computer science is mathematics. Computer scientists are mathematicians. That was a categorization decided long before you and I ever lived. In the sense that you mean, you're only referring to a very small fraction of what "mathematics" refers to in the true sense of the word. It is just as irreconcilably disjointed as Logic is, not unified and fundamentally non-unifiable.
I too think it would be better if "mathematics" was reserved for the gated suburb of ZFC. But that's not the world we live in, courtesy of the same people who pushed ZFC as a foundation to begin with.
> For mathematics, logic is the world, as the copula makes no distinction between substance and existence.
No. There are truths about the subject not captured in any single formal system. Which is why objects are studied form many perspectives.
> Computer scientists are mathematicians.
Some are and some aren’t.
Terence Tao, one of the most important living mathematicians, specifically embraces Lean and has been helping the community embrace it.
What you've done here is an overgeneralization. "People who like math" and "people who like computers" are massive demographics with considerable overlap.
Formalised proofs, and Lean in particular, are still too cumbersome for the "working" mathematician to use day-to-day for research-level math. But clearly there is some interest in where it may take us in future.
> one of the most important living
Maybe. But more clearly one of the most popular online.
He's a Fields Medalist, so that's automatically one of the most important living. He is good at explaining things; I leaned on his Analysis textbooks to great effect when I was taking analysis and functional analysis; in a research class I was trying to calculate Fourier transforms of algebraic sets and found various almost throwaway comments on his blog that were extremely enlightening (alas, only to the extent I could follow them). He's a legit great mind of modern mathematics, and also able to communicate well; a historical rarity indeed.
The topic being discussed here has been addressed specifically in his Mastodon posts: the pre-LLM math process was understood to be "state a proof, validate a proof", but the real, unstated aim was "digest the proof so we can extend it to other results". With LLM/Lean making the first two a lot easier, and possible to do without accomplishing the digestion into the mathematical community, we need to add "digesting proofs (from wherever)" to the job description. Thread starts here: https://mathstodon.xyz/@tao/116477351524980995
Citation needed; Tao certainly is on record using Lean, and that carries some weight.
also, https://en.wikipedia.org/wiki/Curry%E2%80%93Howard_correspon... i.e. there's no reason it should be as you say.
The link is exactly what I’m saying. I only hear cs people talk about it.
For mathematicians a proof is a means to an end, or a medium of expression - they care about what they say and why.
The correspondence isn’t about C programs corresponding to proofs in math papers. It’s a very specific claim about kinds of formal systems which don’t resemble how math or programming is done.
Mathematicians care about interesting ideas, not whether their theorems are true :-)
They care about if it’s true. But the role of the formal proof is a kind of spell checker or static analysis after they have the idea.
> They care about if it’s true.
Not always.
If it is NOT true, they sometimes simply play "what if" and construct a new system where it could be true.
> If it is NOT true, they sometimes simply play "what if" and construct a new system where it could be true.
I trust you have some examples of this?
I think they're talking about conjectures that are unproven but seem "likely true", where people build further math off the assumption, e.g. the Riemann hypothesis.