© 2021 Steven Obua

Philosophy of Abstraction Logic

by Steven Obua
Cite as: https://doi.org/10.47757/pal.2
December 23rd, 2021
Abstract
Abstraction Logic has been introduced in a previous, rather technical article. In this article we take a step back and look at Abstraction Logic from a conceptual point of view. This will make it easier to appreciate the simplicity, elegance, and pragmatism of Abstraction Logic. We will argue that Abstraction Logic is the best logic for serving as a foundation of mathematics.

Introduction

How can we do mathematics on a computer?
This question is bound to evoke a wide variety of responses. Many will find the question somewhat silly and point out that a computer easily handles mathematics, for example via a calculator app. Others will question whether mathematics can be done on a computer at all, because mathematics is a social process happening within the mathematical community.
As has become abundantly clear in the last two decades, computers can support and facilitate social processes, for good and bad. This aspect of mathematics does not seem to be an insurmountable obstacle to its computerisation. Attempts like the ProofPeer [1] project have been made to overcome this obstacle, but so far none of these attempts have been truly successful.
A central and maybe the most important aspect of mathematical activity is the formulation and proving of theorems. How can we do this on a computer, beyond just writing it up in TeX [2]?
In principle, this question has been answered. We can do it by using logic as a foundation of mathematics. Logic can be implemented via software (and/or hardware), and we can then use this logic to formulate and prove theorems. A large-scale example of this is the formal proof of the Kepler conjecture [3].

What is Logic?

To allow the formulation of theorems, logic must provide a language in which to express these theorems. To make proofs of theorems possible, logic must clearly explain what a proof is, and how to certify that something is a proof of a given theorem.
There is no consensus on whether there is a single best logic to use as a foundation of mathematics. There are many different logics, and they differ in how (much) mathematics can be expressed in them. For example, the formal proof of the Kepler conjecture uses simply-typed higher-order logic (HOL) [4].
On one end of the spectrum one can find the opinion that there cannot be a single best logic, and that the best we can do is to develop methods [5] which exploit commonalities among the plethora of logics.
On the other end of the spectrum there is the opinion, championed by Quine [6], that first-order logic (FOL) is the proper foundation of mathematics. In Quine’s opinion, there is a more or less clear boundary between logic and mathematics, and other logics than FOL are not really logic, but simply mathematics dressed up as logic. He considers second-order logic (SOL) just as another way of formulating set theory, which in his opinion does not qualify as logic, but as mathematics.
Indeed, following Quine one can ask, what is the difference between logic and mathematics?
This article argues that the answer to this question is the key to understanding that there indeed is a best logic for serving as a foundation of mathematics.
The short answer is: There is a border between logic and mathematics, but it is not fixed in one place, and it can be moved. Moving the border to the right incorporates more and more specific mathematics into the logic. Because there are many different mathematical theories that can potentially be turned into logics, moving the border to the right leads to a branching out into many different logics. On the other hand, moving the border to the left reverses the effect of branching out, and leads to a single logic at the root, which we call Abstraction Logic (AL).
AL is therefore the simplest of all logics that can serve as a foundation of mathematics. At the same time, it is also the most universal one, as all other logics to the right of it can be represented in it as mathematical theories. This is why we proclaim AL to be the best foundational logic.
The rest of this article is a longer elaboration of this short answer. Instead of diving into the technical details of AL, which are explained elsewhere [7], we will focus on a conceptual understanding of AL.

Calculator Logic

We have speculated earlier that many people will associate mathematics foremost with calculation.
Although this is of course a narrow view of mathematics, it is at the same time surprisingly general: Each term in the language of an AL can be viewed as a calculation, if one is prepared to understand the notion of calculation in a broad enough sense.
Consider the following simple logic which we will call Calculator Logic (CL). A term in CL is either a decimal literal denoting a natural number, like $42$ or $0$, or an equality $a = b$, an addition $a + b$, a subtraction $a - b$, a unary minus $-a$, a multiplication $a \times b$, or a division $a \div b$, where both $a$ and $b$ are terms. We use the usual rules of precedence, and brackets $(a)$ to override them.
The language of CL then consists of all of these terms. The theorems of CL are those terms of CL that evaluate to true. A proof of a theorem $A$ consists of evaluating $A$ and certifying that the result is true.
Therefore $300 - (42 + 8) \times 2 = -(1000 \div (-5))$ is certainly a theorem, and $0 = 1$ is not. The term $6 \times 7$ is not a theorem, because it evaluates to $42$ instead of true.
We define $x \div y$ to be the unique integer $z$ such that $x = y \times z$. If no such integer exists, or it is not unique, then the division fails. This is the case for the expression $1 \div 0$, which fails, and is definitely not true. Therefore $1 \div 0$ is not a theorem. But what about $1 \div 0 = 0 \div 0$? Or $1 \div 0 = 7 \div 5$? And $8 \div 5 = 7 \div 5$? Are these theorems? Both the left-hand side and the right-hand side of these equalities fail, and we decide that fail $=$ fail actually evaluates to true. Thus all three equalities are theorems in CL.
What about the term $(5 = 2 + 2) = 4$? Is that a theorem? No, it is not, because the left-hand side $(5 = 2 + 2)$ evaluates to false, and false is not equal to $4$.
We have been quite pragmatic about the results true, false and fail. We treated them as normal values that can occur during or as the result of a calculation. We could just as well expand the definition of CL terms by introducing dedicated terms for these values: $\top$ for true, $\bot$ for false, and $\Finv$ for fail. This would yield new theorems like $1 \div 0 = \Finv$ and $(5 = 2 + 2) = \bot$. It also allows terms like $2 + \top$. This is a term that does not seem to make much sense, so we decide that evaluating it fails and yields the theorem $2 + \top = \Finv$. Note that adding the terms $\top$, $\bot$ and $\Finv$ to CL does not create any issues that were not already inherent in our approach: $2 + (3 = 3)$ also fails.
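To make these evaluation rules concrete, here is a small evaluator sketch in Python. It is purely illustrative and not part of CL's definition; the term encoding, the name `FAIL`, and the helper `ev` are hypothetical choices.

```python
# Hypothetical sketch of a Calculator Logic evaluator.
# Terms are encoded as integers, True/False/FAIL, or nested tuples such as
# ("+", a, b), ("-", a, b), ("neg", a), ("*", a, b), ("/", a, b), ("=", a, b).

FAIL = "fail"  # the error value, written ↯ in the text

def is_num(v):
    return isinstance(v, int) and not isinstance(v, bool)

def ev(t):
    if not isinstance(t, tuple):
        return t                          # literals and the values ⊤, ⊥, ↯
    op, *args = t
    vals = [ev(a) for a in args]
    if op == "=":
        return vals[0] == vals[1]         # note: fail = fail evaluates to true
    if not all(is_num(v) for v in vals):
        return FAIL                       # e.g. 2 + ⊤ and 2 + (3 = 3) fail
    if op == "+":   return vals[0] + vals[1]
    if op == "-":   return vals[0] - vals[1]
    if op == "neg": return -vals[0]
    if op == "*":   return vals[0] * vals[1]
    if op == "/":                         # x ÷ y: the unique integer z with x = y × z
        x, y = vals
        if y == 0:
            return FAIL                   # 1 ÷ 0 and 0 ÷ 0 both fail
        z, r = divmod(x, y)
        return z if r == 0 else FAIL
    return FAIL

# A term is a theorem of CL iff it evaluates to true:
assert ev(("=", ("-", 300, ("*", ("+", 42, 8), 2)), ("neg", ("/", 1000, ("neg", 5))))) is True
assert ev(("=", ("/", 1, 0), ("/", 0, 0))) is True    # fail = fail
assert ev(("*", 6, 7)) == 42                          # evaluates to 42, hence not a theorem
```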
CL is not an abstraction logic, as it does not feature variables, for example. But as we will see later, it can easily be embedded into an AL such that the terms of CL correspond to a subset of the terms of the AL, and such that a term in CL is a theorem if and only if the corresponding term in the AL is a theorem.
Some will recoil with horror from CL as presented here. They will say, surely something like $2 + (3 = 3)$ should not be allowed to count as a term in the first place? The situation bears obvious similarity with the division of programming language camps into proponents of dynamically typed programming languages like Lisp vs. proponents of statically typed programming languages like ML.
But there is a fundamental difference. While in programming languages static types serve as important safeguards against programming errors, in a logic there already is an ultimate safeguard and arbiter of truth: the theorem.
This obvious fact has been twisted in a rather ingenious way by declaring types to be the same as theorems [8]. While this is certainly a viable way of approaching logic, it is by no means the only one, or even a very natural one, for that matter. Using abstraction logic, one can instead consider types as a useful mathematical theory [9], just like set theory, rather than hard-wiring types into the logic.

Values, Operations, Operators

Abstraction logic presumes that all values we want to calculate with and reason about live in a single universe $\mathcal{U}$. This is the same assumption that first-order logic makes. Unlike FOL, abstraction logic makes the additional assumption that there also is a value true in $\mathcal{U}$. Apart from this, AL makes no a priori demands on $\mathcal{U}$.
The assumption that there is a single universe is an important reason why AL is conceptually so simple. Any value can in principle be combined with any other value without any a priori restrictions. This is also what makes set theory so attractive, but unlike set theory, abstraction logic has no ontological presuppositions other than that $\mathcal{U}$ must contain true.
Values are combined via operations and operators.
An $n$-ary operation (of $\mathcal{U}$) is a function taking $n$ arguments, where each argument is a value in $\mathcal{U}$, and returning a result that again is a value in $\mathcal{U}$.
An $n$-ary operator (of $\mathcal{U}$) is a function taking $n$ arguments, where each argument is either a value or an $m$-ary operation, depending on the argument position; $m$ is fixed for a given position. The result of applying an operator is still always a value.
We consider any value to also be a $0$-ary operation, and any $n$-ary operation to also be an $n$-ary operator. If we want to emphasise that an operation is not a value, we call that operation proper. Similarly, we call an operator proper if it is not an operation.
Note that proper operations and operators are not part of the universe. There are simply too many of them for this to be the case if $\mathcal{U}$ contains more than one value, as we know due to Cantor's theorem. Therefore all attempts to include all operations and/or all operators in the universe are doomed.
A most important difference between abstraction logic and first-order logic is that while FOL only considers arbitrary operations (in the form of functions and relations), AL also allows the use of arbitrary operators. The only proper operators allowed in FOL are $\forall$ and $\exists$.
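In programming terms, an operation is a first-order function on the universe, while a proper operator is a higher-order function that may take operations as arguments. A hypothetical Python sketch of the distinction, using a toy three-element universe:

```python
# Hypothetical illustration of operations vs. operators over a toy universe {0, 1, 2}.

UNIVERSE = (0, 1, 2)

def plus(x, y):
    """A binary operation: values in, value out."""
    return (x + y) % 3

def forall(p):
    """A proper unary operator: it takes a unary *operation* p, not a value,
    and returns a value (here: whether p yields a true result on every element)."""
    return all(p(u) for u in UNIVERSE)

print(plus(1, 2))                         # 0 — an operation applied to values
print(forall(lambda x: plus(x, 0) == x))  # True — an operator applied to an operation
```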

Abstraction and Shape

An abstraction is a name, intended to denote an operator. A signature is a collection of abstractions, where each abstraction is associated with a shape. A shape encodes both what kind of operator an abstraction can possibly denote, and how to interpret a term formed by applying the abstraction.
In general, a shape has the form $(m; p_0, \ldots, p_{n-1})$, where $m$ is the valence and $n$ the arity of the shape. The $p_i$ are subsets of $\{0, \ldots, m-1\}$. To avoid degenerate cases, we demand that the union of all $p_i$ must equal $\{0, \ldots, m-1\}$.
We say that the operator $o$ is compatible with the shape $(m; p_0, \ldots, p_{n-1})$ in all of these cases:
In all other cases $o$ is not compatible with the shape.
Because degenerate shapes are excluded, a value is only compatible with the shape $(0;)$, a unary operation must have shape $(0; \emptyset)$, a binary operation must have shape $(0; \emptyset, \emptyset)$, and a proper unary operator necessarily has shape $(1; \{0\})$.
An abstraction can only denote operators that are compatible with its shape.
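Concretely, a shape is just a valence together with a list of subsets of bound-variable positions. A hypothetical Python sketch, encoding only the example shapes above and the non-degeneracy condition:

```python
from dataclasses import dataclass
from typing import Tuple, FrozenSet

@dataclass(frozen=True)
class Shape:
    valence: int                          # m: number of variables bound by the abstraction
    deps: Tuple[FrozenSet[int], ...]      # (p_0, ..., p_{n-1}): bound variables each argument may use

    def __post_init__(self):
        # non-degeneracy: the union of all p_i must be {0, ..., m-1}
        assert frozenset().union(*self.deps) == set(range(self.valence))

VALUE        = Shape(0, ())                          # e.g. ⊤, with shape (0;)
UNARY_OP     = Shape(0, (frozenset(),))              # e.g. ¬, with shape (0; ∅)
BINARY_OP    = Shape(0, (frozenset(), frozenset()))  # e.g. ⇒, with shape (0; ∅, ∅)
UNARY_BINDER = Shape(1, (frozenset({0}),))           # e.g. ∀, with shape (1; {0})
```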

Syntax

We assume an infinite pool $X$ of variables.
A term, relative to a given signature, is either a variable application or an abstraction application.
A variable application has the form $x[t_0, \ldots, t_{n-1}]$, where $x$ is a variable and the $t_i$ are themselves terms. Instead of $x[]$ we usually just write $x$. We say that $x$ occurs with arity $n$.
An abstraction application has the form $(a\, x_0 \ldots x_{m-1}.\, t_0 \ldots t_{n-1})$, where $a$ is an abstraction belonging to the signature, $m$ is the valence of $a$, and $n$ is the arity of $a$. The $t_i$ are the terms to which the abstraction is being applied. The $x_j$ are distinct variables.
Often, instead of $(a.)$ we just write $a$, instead of $(a.\, s\, t)$ we write $s\, a\, t$ when appropriate, and instead of $(a.\, t)$ we might write $a\, t$ or $t\, a$, depending on whether we consider $a$ to be a prefix or postfix unary operator. In these cases we might disambiguate via brackets $(\ldots)$ and the usual assumptions about associativity and precedence.
We consider the arity with which a variable occurs to be an implicit part of the variable name. This means that the two occurrences of the variable $x$ in $x[x]$ actually refer to two different variables. We could emphasise this by writing $x^1[x^0]$ instead, but we choose the simpler syntax, as no ambiguities can arise. Because of this convention, we do not need to distinguish between wellformed terms and non-wellformed terms. As long as abstraction applications use the correct arity and valence according to the signature, all terms are automatically wellformed.
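The grammar of terms translates directly into a small abstract-syntax datatype. A hypothetical Python sketch (the class names are my own; the arity with which a variable occurs is simply the number of arguments it is applied to):

```python
from dataclasses import dataclass
from typing import Tuple, Union

@dataclass(frozen=True)
class Var:
    """Variable application x[t_0, ..., t_{n-1}]; x[] is just written x and has arity 0."""
    name: str
    args: Tuple["Term", ...] = ()

@dataclass(frozen=True)
class App:
    """Abstraction application (a x_0 ... x_{m-1}. t_0 ... t_{n-1})."""
    abstraction: str
    binders: Tuple[str, ...] = ()    # the m distinct bound variables
    args: Tuple["Term", ...] = ()    # the n argument terms

Term = Union[Var, App]

# The term (∀ x. x = y), assuming abstractions "∀" and "=" are in the signature:
example = App("∀", ("x",), (App("=", (), (Var("x"), Var("y"))),))
```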

Logic

A logic signature is a signature $\mathfrak{S}$ that contains at least three abstractions: $\top$ with shape $(0;)$, $⇒$ with shape $(0; \emptyset, \emptyset)$, and $\forall$ with shape $(1; \{0\})$.
An abstraction logic is a logic signature $\mathfrak{S}$ together with a list $\mathbb{A}$ of terms called axioms.
Obviously, $\top$ represents the value true, $⇒$ is implication, and $\forall$ stands for universal quantification.
A signature $\mathfrak{S}'$ extends a signature $\mathfrak{S}$ if every abstraction of $\mathfrak{S}$ is also an abstraction of $\mathfrak{S}'$, and has the same shape in both $\mathfrak{S}$ and $\mathfrak{S}'$.
An abstraction logic $\mathcal{L}'$ extends an abstraction logic $\mathcal{L}$ if the signature of $\mathcal{L}'$ extends the signature of $\mathcal{L}$, and if every axiom of $\mathcal{L}$ is also an axiom of $\mathcal{L}'$ (modulo $\alpha$-equivalence).

Several Abstraction Logics

Deduction Logic $\mathcal{L}_D$ has signature $\mathfrak{S}_D$, consisting of the minimally required three abstractions, and the following axioms $\mathbb{A}_D$:
  1. $\top$
  2. $A ⇒ (B ⇒ A)$
  3. $(A ⇒ (B ⇒ C)) ⇒ ((A ⇒ B) ⇒ (A ⇒ C))$
  4. $(∀ x.\, A[x]) ⇒ A[x]$
  5. $(∀ x.\, A ⇒ B[x]) ⇒ (A ⇒ (∀ x.\, B[x]))$
The relevance of $\mathcal{L}_D$ is founded in the Deduction Theorem.
Axioms 2 to 5 might surprise despite their familiar form, because one would expect them to apply only to truth values, not to every value in the entire universe. But because abstraction logic has only a single universe $\mathcal{U}$, every value must also be considered to be a truth value, because for any term $t$ one can ask the question: is $t$ true?
Deduction Logic with Equality $\mathcal{L}_E$ extends $\mathcal{L}_D$. It has signature $\mathfrak{S}_E$, which extends $\mathfrak{S}_D$ by adding the equality abstraction $=$. It has axioms $\mathbb{A}_E$, adding the following axioms to $\mathbb{A}_D$:
  1. $x = x$
  2. $x = y ⇒ (A[x] ⇒ A[y])$
  3. $A ⇒ (A = ⊤)$
$\mathcal{L}_E$ is arguably the most important abstraction logic from a theoretical point of view. We will show that every abstraction logic extending $\mathcal{L}_E$ is complete.
Deduction Logic with Equality and Falsity $\mathcal{L}_F$ extends $\mathcal{L}_E$. Its signature $\mathfrak{S}_F$ extends $\mathfrak{S}_E$ by adding $\bot$, denoting false, logical negation $¬$, and inequality $≠$. Its axioms $\mathbb{A}_F$ add the following axioms to $\mathbb{A}_E$:
  1. $⊥ = (∀ x.\, x)$
  2. $¬ A = (A ⇒ ⊥)$
  3. $(x ≠ y) = ¬(x = y)$
The possibility of defining $\bot$ like this was noted as early as 1951 by Church [10], although Church let the quantifier range over booleans only. Much later, in 2013, Kripke found it remarkable enough to exhibit it in a short note about Fregean Quantification Theory [11]. Unlike Church, and like us, Kripke lets the quantifier range over the entire universe, which he calls a Fregean domain.
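As a small illustration of how this definition of $\bot$ interacts with the earlier axioms, ex falso quodlibet becomes derivable; the following sketch suppresses the routine equality reasoning and the exact bookkeeping of substitutions:

$$\begin{array}{ll} \text{Axiom 4 of } \mathcal{L}_D \text{, substituting } [x.\, x] \text{ for } A: & (∀ x.\, x) ⇒ x \\ \text{rewriting with } ⊥ = (∀ x.\, x): & ⊥ ⇒ x \\ \text{substituting } A \text{ for } x: & ⊥ ⇒ A \end{array}$$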
Intuitionistic Logic $\mathcal{L}_I$ extends $\mathcal{L}_F$. Its signature $\mathfrak{S}_I$ adds abstractions to $\mathfrak{S}_F$ for conjunction $∧$, disjunction $∨$, equivalence $\Leftrightarrow$, and existential quantification $∃$. Its axioms $\mathbb{A}_I$ are obtained by adding the following axioms to $\mathbb{A}_F$:
  1. $(A ∧ B) ⇒ A$
  2. $(A ∧ B) ⇒ B$
  3. $A ⇒ (B ⇒ (A ∧ B))$
  4. $A ⇒ (A ∨ B)$
  5. $B ⇒ (A ∨ B)$
  6. $(A ∨ B) ⇒ ((A ⇒ C) ⇒ ((B ⇒ C) ⇒ C))$
  7. $(A \Leftrightarrow B) = ((A ⇒ B) ∧ (B ⇒ A))$
  8. $A[x] ⇒ (∃ x.\, A[x])$
  9. $(∃ x.\, A[x]) ⇒ ((∀ x.\, A[x] ⇒ B) ⇒ B)$
Classical Logic $\mathcal{L}_K$ extends $\mathcal{L}_I$ with the law of the excluded middle:
  1. $A ∨ ¬ A$

Calculator Logic as an AL

Let us explore how Calculator Logic can be represented as an AL. Our starting point is $\mathcal{L}_K$. We follow an axiomatisation of arithmetic [12] described by Wang.

Primitive Abstractions

Here is the list of primitive CL abstractions we add:
$0$, $1$, $x + y$, $x - y$, $-x$, $x \times y$, $x \div y$, $(\text{Num}.\, x)$, and $(\text{Pos}.\, x)$.
Note that as long as abstractions have different arities and/or valence, as subtraction and minus do, they can share the same symbol and still be told apart. Just as we do for variables, we treat valence and arity of an abstraction as part of its name.

Definitions

We add derived CL abstractions via definitions of the form $A ≝ B$, where $A$ is an application of an abstraction $a$. This introduces $a$ as an abstraction, and adds $A = B$ to the list of axioms. The shape of $a$ will be obvious from the structure of $A$.
$$\begin{array}{rcl} (\text{Num}.\, x\, y) & ≝ & (\text{Num}.\, x) ∧ (\text{Num}.\, y) \\ (\text{Num}.\, x\, y\, z) & ≝ & (\text{Num}.\, x) ∧ (\text{Num}.\, y) ∧ (\text{Num}.\, z) \\ (\operatorname{closed} x.\, D[x]\, F[x]) & ≝ & (\forall x.\, D[x] ⇒ D[F[x]]) \\ (\operatorname{closed} x\, y.\, D[x]\, F[x, y]) & ≝ & (\forall x\, y.\, D[x] ∧ D[y] ⇒ D[F[x, y]]) \\ (\operatorname{comm} x\, y.\, F[x, y]) & ≝ & (\forall x\, y.\, (\text{Num}.\, x\, y) ⇒ F[x, y] = F[y, x]) \\ (\operatorname{assoc} x\, y.\, F[x, y]) & ≝ & (\forall x\, y\, z.\, (\text{Num}.\, x\, y\, z) ⇒ F[x, F[y, z]] = F[F[x, y], z]) \\ (\operatorname{distr} x\, y.\, F[x, y]\, G[x, y]) & ≝ & (\forall x\, y\, z.\, (\text{Num}.\, x\, y\, z) ⇒ F[G[x, y], G[x, z]] = G[x, F[y, z]]) \\ (\operatorname{right-neutral} x\, y.\, F[x,y]\, e) & ≝ & (\text{Num}.\, e) ∧ (\forall x.\, (\text{Num}.\, x) ⇒ F[x, e] = x) \\ (\text{either-or}.\, A\, B\, C) & ≝ & (A ∨ B ∨ C) ∧ ¬(A ∧ B) ∧ ¬(A ∧ C) ∧ ¬(B ∧ C) \end{array}$$

Literals

The purpose of the following definitions is to support decimal literals of integers: $$\begin{array}{rcl} 2 & ≝ & 1 + 1 \\ 3 & ≝ & 2 + 1 \\ 4 & ≝ & 3 + 1 \\ 5 & ≝ & 4 + 1 \\ 6 & ≝ & 5 + 1 \\ 7 & ≝ & 6 + 1 \\ 8 & ≝ & 7 + 1 \\ 9 & ≝ & 8 + 1 \\ (\text{dec}.\, x\, y) & ≝ & (9 + 1) \times x + y \end{array}$$ E.g. the literal $57104$ can then be understood as syntactic sugar for $(\text{dec}.\, (\text{dec}.\, (\text{dec}.\, (\text{dec}.\, 5\, 7)\, 1)\, 0)\, 4)$.
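The desugaring of decimal literals is purely mechanical; a hypothetical Python sketch of it:

```python
def desugar(literal: str) -> str:
    """Turn a decimal literal such as "57104" into nested (dec. _ _) applications."""
    term = literal[0]
    for digit in literal[1:]:
        term = f"(dec. {term} {digit})"
    return term

print(desugar("57104"))  # (dec. (dec. (dec. (dec. 5 7) 1) 0) 4)
```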

Axioms

The remaining axioms of CL are:
The last axiom is the axiom of induction. It can be represented in AL effortlessly.

Undefinedness

The axioms we have introduced so far cover the standard behaviour of arithmetic, but do not say anything about terms like $2 + \top$ or $1 \div 0$. This can be an advantage, because it means the theorems proven for this abstraction logic are applicable in more contexts. For example, there might be a context in which we would want $1 \div 0 = \infty$ to hold. Or maybe we would prefer to leave division by zero mostly unspecified, except for requiring $(\text{Num}.\, x) ⇒ (\text{Num}.\, x \div 0)$. Therefore, we might want to leave terms like this undefined on purpose.
The disadvantage of undefined terms is that we cannot use theorems to prove that they are undefined; this is because they are undefined, and we cannot say much about them. This is where type systems shine: many undefined terms are automatically rejected because type checking has proven that the term is illtyped.
To emulate this feature of type systems, we would like to be able to prove a theorem of the form “$t$ is undefined”. We cannot really do that, as explained before. But we can arrange things so that we can prove a theorem of the form “$t$ is an error”. Simplifying further, we decide that there is a single error fail, which we denote by $\Finv$. We can then show that $t$ fails by proving the theorem $t = \Finv$.
We introduce the following tools to deal with errors:
With these, we make the following changes to CL:
This says that the only values in the universe are booleans, numbers, and fail. Furthermore, all arithmetical operations fail if not applied to numbers. Division fails if it is not uniquely determined.
Farmer proposes a traditional approach to undefinedness [13] as a basis for formalising undefinedness. He mentions that undefined could be modelled as a third truth value besides $\top$ and $\bot$, but warns that
[...] it is very unusual in mathematical practice to use truth values beyond true and false. Moreover, the presence of a third truth value makes the logic more complicated to use and implement.
Our approach to undefinedness resolves these issues. There is a third value $\Finv$ with $\Finv ≠ \top$ and $\Finv ≠ \bot$, but logically it is the same as $\bot$, because we can prove $\Finv \Leftrightarrow \bot$. Furthermore, there is no need to change the implementation of the logic at all, as all we needed were a few additional abstractions and axioms.

Semantics

A valuation $\nu$ in $\mathcal{U}$ is a function that assigns to each variable $x$ in $X$ occurring with arity $n$ an $n$-ary operation $\nu(x, n)$ of $\mathcal{U}$. Given a valuation $\nu$ in $\mathcal{U}$, distinct variables $x_0, \ldots, x_{k-1}$ and values $u_0, \ldots, u_{k-1}$ in $\mathcal{U}$, we define an updated valuation via $$\nu[x_0 \coloneqq u_0, \ldots, x_{k-1} \coloneqq u_{k-1}](y, n) = \begin{cases}u_i & \text{for}\ y = x_i\ \text{and}\ n = 0 \\ \nu(y, n) & \text{otherwise}\end{cases}$$
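A valuation and its update operation are easy to mirror in code. A hypothetical Python sketch, representing a valuation as a function from (variable, arity) pairs to operations:

```python
def update(nu, assignments):
    """Return ν[x_0 ≔ u_0, ..., x_{k-1} ≔ u_{k-1}]: only arity-0 entries are overridden."""
    def updated(y, n):
        if n == 0 and y in assignments:
            return assignments[y]
        return nu(y, n)
    return updated

# Example background valuation: every variable denotes 0 (or the constant-0 operation).
nu = lambda y, n: 0 if n == 0 else (lambda *args: 0)
nu2 = update(nu, {"x": 42})
print(nu2("x", 0), nu2("y", 0), nu2("x", 1)(7))   # 42 0 0
```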
An abstraction algebra is a universe $\mathcal{U}$ together with a signature $\mathfrak{S}$ and an interpretation $I(a)$ of each abstraction $a$ in $\mathfrak{S}$ as an operator of $\mathcal{U}$ compatible with the shape of $a$. We say that $a$ denotes $I(a)$ (in the given abstraction algebra).
Abstraction algebras are a generalisation of what Rasiowa calls abstract algebras [14]. An abstract algebra is an abstraction algebra that only contains abstractions with zero valence. Rasiowa uses abstract algebras as models for propositional calculi, i.e. to provide an algebraic semantics for propositional logics. We take this a step further and use abstraction algebras to provide models for abstraction logics.
Given such an abstraction algebra, and a valuation $\nu$ in $\mathcal{U}$, every term $t$ denotes a value $\llbracket t \rrbracket$ in the universe $\mathcal{U}$. The calculation of this value is straightforward, and recurses over the structure of $t$:
Two terms $s$ and $t$ are called $\alpha$-equivalent with respect to a given signature $\mathfrak{S}$ iff for all abstraction algebras $\mathfrak{A}$ with this signature, and all valuations in (the universe of) $\mathfrak{A}$, $s$ and $t$ denote the same value. It can easily be checked whether two terms are $\alpha$-equivalent by first computing their de Bruijn representations, and then checking if the two representations are identical. A proof of this can be found in the original AL report [7] under the moniker “equivalence of $\alpha$-equivalence and semantical equivalence”.
A variable $x$ occurs free with arity $n$ in a term $t$ if there is an occurrence of $x$ with arity $n$ in $t$ that is not bound by any surrounding abstraction. Because abstractions bind only variables of arity $0$, any occurrence of $x$ in $t$ with arity $n > 0$ is free.
It is clear that the value of $t$ depends only on the assignments in the valuation to those variables $x$ which are free in $t$. Therefore the value of a closed term $t$, i.e. a term without any free variables, does not depend on the valuation at all, but only on the abstraction algebra in which the calculation takes place.
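A sketch of the recursive evaluation in Python, following my reading of the evaluation rules in the AL report [7]: a variable application applies the operation assigned by the valuation to the values of its arguments, and an abstraction application hands the interpreted operator either plain values or operations obtained by binding the abstracted variables. The term encoding and helper names are hypothetical.

```python
# Terms: ("var", x, args) or ("app", a, binders, args); shapes[a] = (valence, deps);
# interp[a] is the operator denoted by abstraction a; nu(x, n) is the valuation.

def evaluate(term, interp, shapes, nu):
    if term[0] == "var":
        _, x, args = term
        vals = [evaluate(t, interp, shapes, nu) for t in args]
        op = nu(x, len(args))
        return op(*vals) if args else op
    _, a, binders, args = term
    _, deps = shapes[a]
    def argument(i, t):
        if not deps[i]:                        # p_i = ∅: pass the plain value of t_i
            return evaluate(t, interp, shapes, nu)
        def operation(*us):                    # otherwise: an operation binding the x_j, j ∈ p_i
            upd = dict(zip((binders[j] for j in sorted(deps[i])), us))
            nu2 = lambda y, n: upd[y] if n == 0 and y in upd else nu(y, n)
            return evaluate(t, interp, shapes, nu2)
        return operation
    return interp[a](*(argument(i, t) for i, t in enumerate(args)))

# Toy check: interpret ∀ over the two-element universe {True, False}.
shapes = {"∀": (1, ({0},))}
interp = {"∀": lambda p: p(True) and p(False)}
term = ("app", "∀", ("x",), (("var", "x", ()),))           # the term (∀ x. x)
print(evaluate(term, interp, shapes, lambda y, n: True))   # False
```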

Substitution

An $n$-ary template has the form $[x_0 \ldots x_{n-1}.\, t]$, where the $x_i$ are distinct variables and $t$ is a term. The template binds all free occurrences of $x_i$ in $t$ of arity $0$. A $0$-ary template $[.\, t]$ is usually just written $t$.
A substitution $\sigma$ is a function defined on a domain $D$ that maps a variable $x$, given an arity $n$ such that $(x, n)$ belongs to $D$, to an $n$-ary template $\sigma(x, n)$.
The purpose of a substitution $\sigma$ is to be applied to a term $t$, yielding another term $t / \sigma$ as the result of the substitution. This is done by determining those free occurrences of $x$ in $t$ of arity $n$ that are in $\sigma$'s domain, and replacing them with $\sigma(x, n)$. This means that expressions of the form $[x_0 \ldots x_{n-1}.\, t][s_0, \ldots, s_{n-1}]$ must be resolved to actual terms when replacing variables occurring with non-zero arity. This can in turn be done by applying to $t$ the substitution that maps $x_i$ to $s_i$.
The usual caveats of substitution in the presence of bound variables apply. For example, assume $\sigma$ maps $x$ to $y$. Then applying $\sigma$ to $(a\, y.\, (b.\, x\, y))$ by performing a direct replacement of $x$ with $y$ would yield $(a\, y.\, (b.\, y\, y))$. This is not what we want, as can be seen as follows. Note first that $(a\, y.\, (b.\, x\, y))$ is $\alpha$-equivalent to $(a\, z.\, (b.\, x\, z))$, i.e. the meaning of those two terms is the same in all abstraction algebras with respect to all valuations. Therefore it should not matter to which of the two terms we apply the substitution; the results should be the same, or at least $\alpha$-equivalent. But applying $\sigma$ to the second term (again via direct replacement) yields $(a\, z.\, (b.\, y\, z))$, which is not $\alpha$-equivalent to $(a\, y.\, (b.\, y\, y))$.
These issues can be dealt with by defining substitution via de Bruijn terms instead of terms. The details of this have been worked out in the original AL report [7]. For our purposes here we will just note that we can deal with these issues by renaming bound variables appropriately before performing replacement to avoid clashes between free and bound variables. Also note that there is no canonical result of applying a substitution, but that the result is determined only up to α\alpha-equivalence.

Substitution as Valuation

The main property of substitution is that, given a background valuation $\nu$, we can turn any substitution $\sigma$ into a valuation $\nu_\sigma$. The role of the background valuation is to provide meaning for free variables that are either not in the domain of $\sigma$, or free in some template of $\sigma$.
The valuation $\nu_\sigma$ has the property that for any term $t$ $$\llbracket t \rrbracket_\sigma = \llbracket t / \sigma \rrbracket$$ holds. Here $\llbracket \cdot \rrbracket$ denotes evaluation with respect to $\nu$, and $\llbracket \cdot \rrbracket_\sigma$ calculates the value with respect to $\nu_\sigma$.
We define $\nu_\sigma$ as follows for a variable $x$ occurring with arity $n$:
Further details and a proof of above property using de Bruijn terms can be found in the original AL report [7].

Models

What is the meaning of an abstraction logic? Terms have been given meaning relative to an abstraction algebra and a valuation earlier. Correspondingly, a model of an abstraction logic $\mathcal{L}$ with signature $\mathfrak{S}$ and axioms $\mathbb{A}$ is almost an abstraction algebra $\mathfrak{A}$ with the same logic signature $\mathfrak{S}$, a universe $\mathcal{U}$, and an interpretation $I$, such that we have for all valuations $\nu$ in $\mathcal{U}$ and all axioms $a$ in $\mathbb{A}$: $$\llbracket a \rrbracket = I(\top)$$ There are two important modifications to be made before arriving at the actual definition of a model:
  1. We are not interested in all possible abstraction algebras as models, but only in those that are logic algebras.
  2. We are not interested in testing the axioms with all possible valuations, but only with those that belong to a space of valuations depending on the model itself.
To address the first point, we define a logic algebra to be an abstraction algebra with logic signature $\mathfrak{S}$ such that $⇒$ and $\forall$ have denotations satisfying the following minimum requirements:
To address the second point, we define a valuation space $\mathcal{V}$ of an abstraction algebra $\mathfrak{A}$ to be a collection of valuations in the universe of $\mathfrak{A}$ such that:
These requirements for $\mathcal{V}$ will turn out to be just strong enough to show soundness of any abstraction logic, and weak enough to prove completeness for any abstraction logic extending $\mathcal{L}_E$.
A model of an abstraction logic $\mathcal{L}$ is then a pair $(\mathfrak{A}, \mathcal{V})$ such that $\mathfrak{A}$ is a logic algebra with the same signature as $\mathcal{L}$, and $\mathcal{V}$ is a valuation space of $\mathfrak{A}$, such that $\llbracket a \rrbracket = I(\top)$ for all valuations $\nu$ in $\mathcal{V}$ and all axioms $a$ in $\mathbb{A}$.
A model is called degenerate if the universe of $\mathfrak{A}$ consists of a single element. Note that every abstraction logic has a degenerate model.
If every model of $\mathcal{L}$ is also a model of the logic with the same signature as $\mathcal{L}$ and the single axiom $t$, then we say that $t$ is valid in $\mathcal{L}$ and write $${\mathcal{L}} ⊨ t$$

Proofs

Given a logic $\mathcal{L}$ with signature $\mathfrak{S}$ and axioms $\mathbb{A}$, a proof in $\mathcal{L}$ of a term $t$ is one of the following:
  1. An axiom invocation $\operatorname{AX}(t)$ such that $t$ is an axiom in $\mathbb{A}$.
  2. An application of substitution $\operatorname{SUBST}(t, \sigma, p_s)$ where
  3. An application of modus ponens $\operatorname{MP}(t, p_h, p_g)$ such that
  4. Introduction of the universal quantifier $\operatorname{ALL}(t, x, p_s)$ where
If there exists a proof of $t$ in $\mathcal{L}$, we say that $t$ is a theorem of $\mathcal{L}$ and write $${\mathcal{L}} ⊢ t$$
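The four kinds of proofs form a simple tree-shaped data structure; the precise side conditions attached to each rule (elided in the list above) are spelled out in the AL report [7]. A hypothetical Python sketch of just the structure:

```python
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Ax:
    """AX(t): t is an axiom."""
    t: object

@dataclass(frozen=True)
class Subst:
    """SUBST(t, σ, p_s): substitution step, applying σ to the subproof p_s."""
    t: object
    sigma: object
    ps: "Proof"

@dataclass(frozen=True)
class Mp:
    """MP(t, p_h, p_g): modus ponens step combining the subproofs p_h and p_g."""
    t: object
    ph: "Proof"
    pg: "Proof"

@dataclass(frozen=True)
class All:
    """ALL(t, x, p_s): universal quantifier introduction over the variable x."""
    t: object
    x: str
    ps: "Proof"

Proof = Union[Ax, Subst, Mp, All]
```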

The Deduction Theorem

The Deduction Theorem for abstraction logic is that any proof of $A$ in a logic $\mathcal{L}^+$ with axioms $\mathbb{A}$ and the additional axiom $\phi$ can be translated into a proof of $\phi' ⇒ A$ in the logic $\mathcal{L}$ with the same signature, but only axioms $\mathbb{A}$, as long as $\mathcal{L}$ extends $\mathcal{L}_D$.
Here $\phi'$ is the universal closure of $\phi$, i.e. $\phi' = ∀ x_0.\, ∀ x_1. \ldots ∀ x_{n-1}.\, \phi$, where the $x_i$ are the distinct free variables of $\phi$. Note that this implies that $\phi$ cannot contain free variables of non-zero arity for the Deduction Theorem to apply without any further conditions.
The proof of the Deduction Theorem is simply by recursing over the structure of the original proof and translating it step by step.
A special case of a stronger version of the Deduction Theorem is $${\mathcal{L}}_D ⊢ A ⇒ A$$ We can prove this directly: Substituting $A$ for $C$ in Axiom 3 and then applying modus ponens using Axiom 2 yields the theorem $(A ⇒ B) ⇒ (A ⇒ A)$. Substituting $B ⇒ A$ for $B$ and applying modus ponens using Axiom 2 again, we obtain the theorem $A ⇒ A$.
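For readability, the same derivation laid out step by step (using the axiom numbering of $\mathcal{L}_D$ above):

$$\begin{array}{ll} \text{Axiom 3, substituting } A \text{ for } C: & (A ⇒ (B ⇒ A)) ⇒ ((A ⇒ B) ⇒ (A ⇒ A)) \\ \text{modus ponens with Axiom 2:} & (A ⇒ B) ⇒ (A ⇒ A) \\ \text{substituting } B ⇒ A \text{ for } B: & (A ⇒ (B ⇒ A)) ⇒ (A ⇒ A) \\ \text{modus ponens with Axiom 2:} & A ⇒ A \end{array}$$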
In a system for machine-assisted theorem proving like Practal [15] it makes sense to bake in native support for LD{\mathcal{L}}_D so that moving theorems from L+{\mathcal{L}}^+ to L{\mathcal{L}} doesn’t necessarily involve the actual translation of proofs.

Soundness

Abstraction logic is sound, i.e. every theorem of $\mathcal{L}$ is also valid in $\mathcal{L}$: $${\mathcal{L}} ⊢ t \quad\text{implies}\quad {\mathcal{L}} ⊨ t$$ We show by induction over $p$ that if $p$ is a proof of $t$ in $\mathcal{L}$, then $t$ is valid in $\mathcal{L}$:

Completeness

An abstraction logic $\mathcal{L}$ is called complete if every valid term is also a theorem: $${\mathcal{L}} ⊨ t \quad\text{implies}\quad {\mathcal{L}} ⊢ t$$
If $\mathcal{L}$ extends $\mathcal{L}_E$, then $\mathcal{L}$ is complete.
To prove this, we construct the Rasiowa Model $(\mathfrak{R}, \mathcal{V})$ of $\mathcal{L}$, which is a model for $\mathcal{L}$. The construction relies heavily on the fact that $\mathcal{L}$ extends $\mathcal{L}_E$. The universe of the Rasiowa Model consists of equivalence classes of terms, where two terms $s$ and $t$ are called equivalent if $s = t$ is a theorem of $\mathcal{L}$. This is a variation of a standard approach to proving completeness, originally due to Lindenbaum and Tarski. We write $[t]_{\mathfrak{R}}$ for the equivalence class to which a term $t$ belongs. The details of the construction can be found in the original AL paper [7] (the Rasiowa Model is called Rasiowa Abstraction Algebra there, which is a misnomer). In the construction we make essential use of the fact that $\mathcal{V}$ does not need to be the class of all valuations, but can be custom-tailored as long as we can guarantee that $\mathcal{V}$ has the properties required of a valuation space.
Most importantly, there exists a valuation in $\mathcal{V}$ with $\llbracket t \rrbracket = [t]_{\mathfrak{R}}$ for any term $t$, where $\llbracket \cdot \rrbracket$ evaluates with respect to that valuation. Furthermore, for the interpretation $I$ of $\mathfrak{R}$ we have $$I(\top) = [\top]_{\mathfrak{R}}$$
From this we can see immediately that $\mathcal{L}$ is complete. For assume that $t$ is valid. Then $$[t]_{\mathfrak{R}} = \llbracket t \rrbracket = I(\top) = [\top]_{\mathfrak{R}}$$ which shows that $t = \top$ is a theorem of $\mathcal{L}$, and thus $t$ is a theorem as well.

Consistency

An abstraction logic $\mathcal{L}$ is called inconsistent if every term is a theorem, and consequently consistent if there is at least one term which is not a theorem.
If $\mathcal{L}$ is an extension of the deduction logic $\mathcal{L}_D$, then inconsistency of $\mathcal{L}$ is equivalent to $${\mathcal{L}} ⊢ ∀ x.\, x$$ This is easy to see. Assume $∀ x.\, x$ is a theorem in $\mathcal{L}$. Substituting $[x.\, x]$ for $A$ in Axiom 4 of $\mathcal{L}_D$ and applying modus ponens, it follows that $x$ is a theorem. Substituting any term $t$ for $x$ shows that $t$ is a theorem.
If an abstraction logic $\mathcal{L}$ is inconsistent, then all models of $\mathcal{L}$ are degenerate. Assume $\mathcal{L}$ is inconsistent. It follows that $x$ is a theorem for any variable $x$ of arity zero. That means that $x$ is valid in any model $(\mathfrak{A}, \mathcal{V})$ of $\mathcal{L}$. In particular, for any valuation $\nu$ in $\mathcal{V}$ we have $\llbracket x \rrbracket = I(⊤)$. Such a $\nu$ exists because $\mathcal{V}$ is non-empty. For any value $u$ in the universe $\mathcal{U}$ of $\mathfrak{A}$ the valuation $\nu[x \coloneqq u]$ is in $\mathcal{V}$ as well, and therefore $u = \llbracket x \rrbracket_{x \coloneqq u} = I(⊤)$. That means $\mathcal{U}$ consists only of $I(⊤)$, thus any model of $\mathcal{L}$ is degenerate.
What about the other direction? If every model of an abstraction logic $\mathcal{L}$ is degenerate, then $\llbracket t \rrbracket = I(⊤)$ holds for any term $t$ with respect to any valuation. Therefore $${\mathcal{L}} ⊨ t$$ But is $t$ then a theorem of $\mathcal{L}$? We have seen that this is indeed the case if $\mathcal{L}$ extends $\mathcal{L}_E$, because then $\mathcal{L}$ is complete.
Therefore, if $\mathcal{L}$ extends $\mathcal{L}_E$, then inconsistency of an abstraction logic $\mathcal{L}$ is equivalent to all of its models being degenerate.
It is thus straightforward to see that $\mathcal{L}_K$ is consistent. As a model, use a two-element Boolean algebra [16] extended in the obvious way with operators denoting $\forall$ and $\exists$. The axioms of $\mathcal{L}_K$ can then be checked by simply exhausting all possibilities.
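This exhaustive check is completely mechanical. A hypothetical Python sketch that checks a few representative axioms over the two-element universe; the two-valued interpretations of $⇒$, $=$, $¬$, $∨$ and $\forall$ chosen here are the obvious ones, which is an assumption about the intended model rather than part of the logic:

```python
from itertools import product

U = (True, False)                          # the two-element universe

imp    = lambda a, b: (not a) or b         # ⇒
eq     = lambda a, b: a == b               # =
neg    = lambda a: not a                   # ¬
disj   = lambda a, b: a or b               # ∨
top    = True                              # ⊤
forall = lambda p: all(p(u) for u in U)    # ∀ as an operator on unary operations

def holds(axiom, arity):
    """An axiom holds iff it denotes ⊤ under every valuation of its free variables."""
    return all(axiom(*vs) is True for vs in product(U, repeat=arity))

assert holds(lambda a, b: imp(a, imp(b, a)), 2)            # A ⇒ (B ⇒ A)
assert holds(lambda a: imp(a, eq(a, top)), 1)              # A ⇒ (A = ⊤)
assert holds(lambda a: imp(forall(lambda x: imp(a, x)),    # (∀x. A ⇒ B[x]) ⇒ (A ⇒ ∀x. B[x]),
                           imp(a, forall(lambda x: x))), 1)  # instantiated with B[x] = x
assert holds(lambda a: disj(a, neg(a)), 1)                 # A ∨ ¬A
```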

Definitions

A head $h$ for an abstraction $a$ is an abstraction application of the form $(a\, x_0 \ldots x_{m-1}.\, t_0 \ldots t_{n-1})$ such that:
The shape of $a$ can easily be determined from a head for $a$. It must be $(m; p_0, \ldots, p_{n-1})$, where $p_i = \{j_0, \ldots, j_{k-1}\}$ given the above form of $t_i$.
Given a logic $\mathcal{L}$ extending $\mathcal{L}_E$, a definition has the form $h ≝ t$, where $h$ is a head for an abstraction $a$ not contained in $\mathcal{L}$, and $t$ is a term of $\mathcal{L}$ such that all of $t$'s free variables are also free variables of $h$.
A definition results in a logic $\mathcal{L}'$ which is an extension of $\mathcal{L}$. The signature of $\mathcal{L}'$ is that of $\mathcal{L}$ extended by the abstraction $a$. The axioms of $\mathcal{L}'$ are the axioms of $\mathcal{L}$ plus the axiom $h = t$.
Furthermore, $\mathcal{L}'$ is a conservative extension of $\mathcal{L}$, i.e. $\mathcal{L}'$ is not only an extension of $\mathcal{L}$, but if $t$ is a term of $\mathcal{L}$, then $t$ is a theorem of $\mathcal{L}$ iff $t$ is a theorem of $\mathcal{L}'$. This is straightforward to prove by induction over the structure of proofs, and considering the more general statement that $t'$ is a theorem of $\mathcal{L}$ iff $t$ is a theorem of $\mathcal{L}'$. Here $t'$ results from $t$ by rewriting away all occurrences of $a$ using the equation $h = t$.
Therefore if $\mathcal{L}'$ is inconsistent, then $\mathcal{L}$ is inconsistent as well, because $\forall x.\, x$ is a term of $\mathcal{L}$. Consequently, $\mathcal{L}$ is consistent iff $\mathcal{L}'$ is consistent.
We can also allow conditional definitions of the form $c ⇒ h ≝ t$ by considering this as an abbreviation for $h ≝ (\text{if}.\, c\, t)$, where the abstraction “$\text{if}$” is governed by the single axiom $(\text{if}.\, \top\, x) = x$, which is assumed to be already contained in $\mathcal{L}$. This allows us to derive $c ⇒ h = t$ in $\mathcal{L}'$.
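One way to see why $c ⇒ h = t$ becomes derivable (a sketch, ignoring the universal closure taken by the Deduction Theorem and the precise substitution bookkeeping):

$$\begin{array}{ll} \text{the defining axiom:} & h = (\text{if}.\, c\, t) \\ \text{assume } c \text{, so by } A ⇒ (A = ⊤): & c = ⊤ \\ \text{rewriting in the defining axiom:} & h = (\text{if}.\, \top\, t) \\ \text{the axiom } (\text{if}.\, \top\, x) = x \text{ with } t \text{ for } x: & h = t \\ \text{Deduction Theorem:} & c ⇒ h = t \end{array}$$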

Rasiowa’s Algebraic Approach

Abstraction logic was born out of the desire to find solid logical foundations for Practical Types [9]. Discussions on the Lean Zulip chat with Mario Carneiro and others left me convinced that while Practical Types were basically the right approach, in order to make a convincing case I needed to be able to explain the logical foundations with a level of detail that I was not able to provide at that moment.
During my search for logical foundations for Practical Types, I came across the work of Helena Rasiowa. The algebraic approach to logic [14] as Rasiowa applied it to propositional logic made immediate sense to me. Rasiowa chose to generalise this approach to predicate logic by following Mostowski and interpreting quantifiers as least upper bounds and greatest lower bounds. To me, that does not seem to be the proper way to generalise Rasiowa's approach. Instead, I believe abstraction logic is. Just like Rasiowa restricts her models by restricting abstract algebras to implicative algebras to ensure that implication satisfies modus ponens, AL meets minimal requirements of implication and universal quantification by restricting abstraction algebras to logic algebras.
Why did Rasiowa miss this generalisation, which now seems obvious with hindsight? The reason may simply be that operators were not part of her normal toolbox, while working with higher-order functions is standard in both programming and mechanised theorem proving today. Another possible reason is that working with a Fregean domain [11] might have felt alien to her. But on the other hand, her models for propositional logic are topological spaces, and it must have felt somewhat of a waste of space to fill them just with mere truth values. As we can see now, these spaces can represent whole universes.
Our approach seems to be conceptually simpler and less involved than Rasiowa's approach to predicate logics. An important indicator for this is how simple and elegant our completeness proof for abstraction logic is, without any need to exclude special cases (once the Rasiowa Model is constructed). Completeness holds for all abstraction logics from $\mathcal{L}_E$ on, including intuitionistic abstraction logic, classical abstraction logic, and all logics in between and beyond. On the other hand, completeness for predicate logics based on Rasiowa's approach is in general complicated territory [17]. Ono concludes his 1995 paper [18] on the completeness of predicate logics with respect to algebraic semantics with the words
[...] there seems to be a big difference between instantiations in logic and in algebra. In other words, though usually we interpret quantifiers as infinite joins and meets in algebraic semantics, i.e., we deal with quantifiers like infinite disjunctions and conjunctions, this will be not always appropriate. So, one of the most important questions in the study of algebraic semantics for predicate logics would be how to give an appropriate interpretation of quantifiers, in other words, how to extend algebraic semantics.
It seems that abstraction logic just might be this extension of algebraic semantics that Ono was looking for.

Propositional Logic

Given that AL is a generalisation of algebraic semantics for propositional logics, we can formulate all propositional logics from Rasiowa’s book [14] as ALs. As an example, we do this for positive implicative logic.
There is a caveat though: $\top$ and $\forall$ are part of any AL by definition, but these abstractions are not present in positive implicative logic. We will not be bothered by this, and just assume the following two axioms:
The first axiom is obvious. The second axiom effectively renders \forall a complicated way of writing \top.
The remaining axioms of positive implicative logic are:
The semantics of this AL is identical to the semantics given by Rasiowa. Rasiowa shows that this logic is both complete and consistent.
Another approach is to define propositional logic as an abstraction logic such that the universe only consists of propositions. Starting from $\mathcal{L}_I$ or $\mathcal{L}_K$, we add the axiom $$(A \Leftrightarrow B) ⇒ (A = B)$$ and obtain intuitionistic propositional AL $\mathcal{L}_{IP}$ or classical propositional AL $\mathcal{L}_{KP}$, respectively. Completeness follows from both $\mathcal{L}_{IP}$ and $\mathcal{L}_{KP}$ extending $\mathcal{L}_E$. Consistency can be confirmed as before for $\mathcal{L}_K$.

Paraconsistent Logic

AL will be hard-pressed to implement any paraconsistent logic, as AL assumes that there is only a single value representing true. This is at odds with Priest's 3-valued Logic of Paradox [19], for example, which has an additional value paradoxical which is also “true”. In particular, any AL extending $\mathcal{L}_F$ has the principle of explosion as a theorem: $$A ⇒ ((¬ A) ⇒ B)$$ It should be possible to adapt abstraction logic to allow multiple different values all representing “true”. But as no paraconsistent logic provides a widely accepted foundation of mathematics, there seems to be no need to accommodate paraconsistency at this point.

First-Order Logic

To represent first-order logic (FOL) as an abstraction logic, we should be able to use the same method as for Calculator Logic. But instead of just the numbers “$\text{Num}$”, we embed an entire FOL-universe in our AL-universe $\mathcal{U}$. As for Calculator Logic, we build FOL on top of $\mathcal{L}_K$, and we model undefinedness via $\Finv$. FOL operators would be defined in such a way that illformed FOL formulas evaluate to $\Finv$. We will not flesh out this construction in detail at this point, but leave this for another time.

The Best Logic?

I hope a picture is forming by now of why I believe Abstraction Logic to be the best logic. It is doubtful that this claim can ever be proven. But it doesn't need to be. I mean best logic in a very practical sense, and therefore, if the claim is indeed true, Abstraction Logic should be able to form the basis of a theorem proving system which surpasses other systems based on just FOL, set theory, HOL, or dependent types. In fact, it should be able to accommodate all of these foundations, as they all should be axiomatisable in Abstraction Logic.
So, let’s build Practal and see!

References

[1]Steven Obua, Jacques Fleuriot, Phil Scott, David Aspinall. (2014). ProofPeer: Collaborative Theorem Proving, https://arxiv.org/abs/1404.6186.
[2]Donald E. Knuth. (1986). The TeXbook, Addison-Wesley.
[3]Thomas Hales et al. (2017). A Formal Proof of the Kepler Conjecture, https://doi.org/10.1017/fmp.2017.1.
[4]Alonzo Church. (1940). A formulation of the simple theory of types, https://doi.org/10.2307/2266170.
[5]Florian Rabe. (2015). The Future of Logic: Foundation-Independence, https://doi.org/10.1007/s11787-015-0132-x.
[6]W. V. Quine. (1986). Philosophy of Logic, https://doi.org/10.2307/j.ctvk12scx.
[7]Steven Obua. (2021). Abstraction Logic, https://doi.org/10.47757/abstraction.logic.2.
[8]Philip Wadler. (2015). Propositions as types, https://doi.org/10.1145/2699407.
[9]Steven Obua. (2021). Practical Types, https://doi.org/10.47757/practical.types.1.
[10]Alonzo Church. (1951). A formulation of the logic of sense and denotation, https://archive.org/details/structuremethodm0000henl.
[11]Saul A. Kripke. (2013). Fregean Quantification Theory, https://doi.org/10.1007/s10992-013-9299-x.
[12]Hao Wang. (1957). The Axiomatization of Arithmetic, https://doi.org/10.2307/2964176.
[13]William M. Farmer. (2004). Formalizing Undefinedness Arising in Calculus, https://doi.org/10.1007/978-3-540-25984-8_35.
[14]Helena Rasiowa. (1974). An algebraic approach to non-classical logics, https://doi.org/10.1016/s0049-237x(09)x7004-7.
[15]Steven Obua. (2021). Practal — Practical Logic: A Bicycle for Your Mathematical Mind, https://doi.org/10.47757/practal.1.
[16]Paul Halmos, Steven Givant. (2009). Introduction to Boolean Algebras, https://doi.org/10.1007/978-0-387-68436-9.
[17]Petr Cintula, Carles Noguera. (2015). A Henkin-Style Proof of Completeness for First-Order Algebraizable Logics, https://doi.org/10.1017/jsl.2014.19.
[18]Hiroakira Ono. (1995). Algebraic semantics for predicate logics and their completeness, https://www.kurims.kyoto-u.ac.jp/~kyodo/kokyuroku/contents/pdf/0927-7.pdf.
[19]Graham Priest. (1979). The Logic of Paradox, https://doi.org/10.1007/BF00258428.