Sunday, July 3, 2016

Proof-Theoretic Semantics for Non-Philosophers

`Proof-theoretic semantics is an alternative to truth-condition semantics,' writes Peter Schroeder-Heister in the Stanford Encyclopedia of Philosophy, and he ought to know, since he coined the label for the idea in 1991. He says that proof-theoretic semantics is part of the tradition according to which `the meaning of a term should be explained by reference to the way it is used in our language'. Meaning-as-use is the slogan for that tradition, which somehow sends people back to Wittgenstein and his `language games'.

This post is merely a summary of the first part of Schroeder-Heister's Stanford Encyclopedia entry on proof-theoretic semantics. Why bother, then? I almost hear you ask. Well, I couldn't find a short account that I liked. This is definitely not it, but it should work as a marker while I continue looking.
Also, I have found many interesting articles on Prawitz's, Schroeder-Heister's, and Girard's webpages that might be it. But there is no time to read them now, with only a week before the course at NASSLLI.

Mathematicians might want to think of it as a generalization of proof theory starting from Gentzen's work (Investigations into Logical Deduction). Gentzen famously remarked that the introduction rules in his calculus of natural deduction could be said to define the meanings of the logical constants, while the elimination rules can be obtained as consequences of these definitions.
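To see what Gentzen meant, here is a tiny sketch in Haskell under the propositions-as-types reading (my choice of illustration, not something in Schroeder-Heister's entry; all the names are made up): a proof of a conjunction is a pair of proofs, the introduction rule builds the pair, and the elimination rules only take apart what the introduction rule put in.

    -- Introduction rule for conjunction: from proofs of a and b,
    -- build a proof of a ∧ b, read here as the pair type (a, b).
    conjIntro :: a -> b -> (a, b)
    conjIntro x y = (x, y)

    -- Elimination rules: from a proof of a ∧ b, recover a proof of a, or of b.
    -- Nothing more can be extracted than what the introduction put in.
    conjElimLeft :: (a, b) -> a
    conjElimLeft (x, _) = x

    conjElimRight :: (a, b) -> b
    conjElimRight (_, y) = y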

Hilbert is credited with the revolutionary idea that proofs should be studied as mathematical objects themselves, but it was Prawitz (1972) who drew the distinction between `reductive' proof theory and `general' proof theory. Hilbert-style reductive proof theory is the “attempt to analyze the proofs of mathematical theories with the intention of reducing them to some more elementary part of mathematics such as finitistic or constructive mathematics”, while in general proof theory “proofs are studied in their own right in the hope of understanding their nature”.

Meanwhile, Kreisel proposes to explain proof theory “from a neglected point of view. Proofs and their representations by formal derivations are treated as principal objects of study, not as mere tools for analyzing the consequence relation” (Kreisel, 1971). Kreisel focuses on the dichotomy between a theory of proofs and a theory of provability, while Prawitz concentrates on the different goals proof theory may pursue. However, both stress the necessity of studying proofs as fundamental entities by means of which we acquire mathematical knowledge.
Thus in general proof theory we are not only interested in whether B follows from A, but in the ways by which we arrive at B starting from A.
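A toy way to see the difference, again in Haskell under the propositions-as-types reading (my illustration, with invented names): the two terms below both witness that A follows from A ∧ A, so a theory of provability cannot tell them apart, but as proofs they are genuinely different objects.

    -- Both functions prove the same statement, (a, a) -> a, but they are distinct proofs.
    proofLeft :: (a, a) -> a
    proofLeft (x, _) = x    -- arrive at the conclusion via the first component

    proofRight :: (a, a) -> a
    proofRight (_, y) = y   -- arrive at the conclusion via the second component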

Most forms of proof-theoretic semantics are intuitionistic in spirit, which means in particular that principles of classical logic such as the law of excluded middle or the double negation law are rejected or at least considered problematic. The main tool of proof-theoretic semantics, the calculus of natural deduction, is biased towards intuitionistic logic, in the sense that the straightforward formulation of its elimination rules is the intuitionistic one. Classical logic is only available by means of some rule of indirect proof, which, at least to some extent, destroys the symmetry of the reasoning principles. Of particular importance is its functional view of implication, according to which a proof of A → B is a constructive function which, when applied to a proof of A, yields a proof of B. This functional perspective underlies many conceptions of proof-theoretic semantics, in particular those of Lorenzen, Prawitz and Martin-Löf.
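In the same Haskell sketch (mine, not the entry's own formalism), the functional view says that a proof of A → B literally is a function from proofs of A to proofs of B: modus ponens becomes function application, and proofs of implications compose like functions.

    -- Modus ponens: apply a proof of a -> b to a proof of a to get a proof of b.
    modusPonens :: (a -> b) -> a -> b
    modusPonens f x = f x

    -- Proofs of implications compose: from a -> b and b -> c we obtain a -> c.
    chainProofs :: (a -> b) -> (b -> c) -> (a -> c)
    chainProofs f g = g . f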

Natural deduction is based on five major ideas:
  • Discharge of assumptions: Assumptions can be “discharged” or “eliminated” in the course of a derivation, so the central notion of natural deduction is that of a derivation depending on assumptions.
  • Separation/Modularity: Each primitive rule schema contains only a single logical constant.
  • Introduction and elimination: The rules for logical constants come in pairs. The introduction rule(s) allow(s) one to infer a formula with the constant in question as its main operator, while the elimination rule(s) permit(s) one to draw consequences from such a formula.
  • Reduction: For every detour consisting of an introduction rule immediately followed by an elimination rule there is a reduction step removing this detour (see the small sketch after this list).
  • Normalization: By successive applications of reductions, derivations can be transformed into normal forms which contain no detours.
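Here is what a detour and its reduction look like in the same Haskell sketch (my rendering of the usual reduction step for conjunction, fst (x, y) ~> x): building a pair only to project a component out again is a detour, and removing it leaves a detour-free proof. Normalization is just the repeated removal of such detours.

    -- Detour: conjunction introduction (pairing) immediately followed by elimination (fst).
    detour :: a -> b -> a
    detour x y = fst (x, y)

    -- After the reduction step fst (x, y) ~> x, the detour disappears.
    normalForm :: a -> b -> a
    normalForm x _ = x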
Lorenzen (1955) introduced the idea of an inversion principle, which says that everything that can be obtained from every defining condition of A can be obtained from A itself.
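For disjunction, one way to read this (my gloss in the same Haskell sketch, not Lorenzen's own formulation) is: the defining conditions of A ∨ B are `a proof of A' and `a proof of B', so whatever can be obtained from each of those conditions can already be obtained from A ∨ B itself, by case analysis.

    -- Case analysis on a ∨ b (Haskell's Either): whatever follows from a alone
    -- and from b alone already follows from a ∨ b. This is the standard
    -- elimination rule for disjunction, essentially the library function 'either'.
    disjElim :: (a -> c) -> (b -> c) -> Either a b -> c
    disjElim fromA _     (Left x)  = fromA x
    disjElim _     fromB (Right y) = fromB y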
von Kutschera (1968) introduced “Gentzen semantics”, a semantics of logically complex implication-like statements A1,…,An → B with respect to calculi K which govern the reasoning with atomic sentences.
----