Boolean algebras

In mathematics, our understanding often progresses from more or less concrete examples to abstract concepts, from particular cases to the general one, from something we can understand using our everyday intuition and experience to something whose understanding requires rigorous assumptions and meticulous proofs. Thus, for example, our intuitive notion of change is captured in the definition of the derivative, which itself is a particular example of a linear operator. Similarly, we can immediately see that a square is ‘more symmetric’ than a parallelogram; it is then the task of mathematics to make this precise. One of the powers of abstraction lies in its generality: once a problem is solved on an abstract level, we are immediately freed from solving many individual problems. Perhaps more importantly for mathematics, abstraction simplifies many concepts by removing the irrelevant details, thus allowing us to see the essence of the concept. This movement from the concrete to the abstract will be the guiding principle in this blog post, as I attempt to motivate Boolean algebras.

In the previous posts, we have discussed what is meant by classical first-order logic and how linear spaces can be used to break the distributivity law, demonstrating that ‘the logic of linear spaces’ does not form a classical logic. The linear spaces were in turn motivated by the fact that they form the state space of quantum mechanics. This time we will, in some sense, take a step back: we will ignore the fact that first-order logic has quantifiers and consider only the logical propositions formed using ‘and’, ‘or’ and ‘not’1; we will call this propositional logic.

Although the distributive law does not hold for linear spaces, it looks like whatever ‘the logic of linear spaces’ is, it is not too far from propositional logic; for example, it still makes sense to ask whether something is not in a given linear space, or whether something is in a linear space A or in a linear space B. This is a typical example of a more general concept lurking around the corner: we have two very similar things which disagree in one or two properties, and the task is to find what exactly the similarities of the two are.

We begin by defining a bounded lattice; if this feels too boring or too technical, just skip the axioms! Although I have used the symbols \neg, \land and \lor before, for now let’s forget they had any meaning; the only properties they have are the ones derivable from the axioms we are about to introduce. Let L be some set, whose elements we will denote by small letters x, y, z, .... Let \land and \lor be operations on L, that is, for any elements x and y in L both x \land y and x \lor y are again elements of L. We say that such a set is a lattice if the following axioms hold.

(1)  x \lor x = x,    x \land x = x

(2)  x \lor y = y \lor x,    x \land y = y \land x

(3)  x \lor (y \lor z) = (x \lor y) \lor z,    x \land (y \land z) = (x \land y) \land z

(4)  (x \lor y) \land y = y,    (x \land y) \lor y = y

In short, the axioms say that the operations \land and \lor must be (1) idempotent (combining an element with itself gives the element back); (2) commutative (the order of the arguments doesn’t matter); and (3) associative (the grouping of the operations doesn’t matter). Axiom (4) is called absorption. A lattice L is said to be bounded if there are elements 0 and 1 in L such that for any element x in L the following holds.

(5)  x \lor 0 = x,    x \land 1 = x

Note that from (4) and (5) it follows that x \land 0 = 0 and x \lor 1 = 1: to see the first, rewrite x as x \lor 0 using (5) and apply the absorption axiom (4), giving x \land 0 = (x \lor 0) \land 0 = 0; the second follows in the same way. In this case 0 is called the bottom element of L and 1 the top element of L; the reason for this terminology will become clear via examples. We now observe that both propositional logic and linear spaces satisfy the axioms (1)-(5).2 For propositional logic, the set L is just the set of all logical propositions, \land is the logical ‘and’, \lor the ‘or’, 0 = false is any sentence which is always false and 1 = true is any sentence which is always true.

For the logic of linear spaces, L is the set of linear subspaces of a given linear space; continuing the example of the previous post, we can take the ambient space to be a plane. Recall that \land is defined as the intersection of two subspaces (which are mostly lines in our examples), and \lor is ‘the smallest linear space containing the two spaces’ (the entire plane in the case of two distinct lines). In this case 0 = \textrm{the zero vector space}, that is, the origin, and 1 is the entire plane, so 0 is the smallest space you can get and correspondingly 1 is the largest one.

We have thus succeeded in the task of finding what is common to propositional logic and linear spaces: namely, they are both bounded lattices. What we should ask next is whether the similarities end here. It turns out the answer is no; there is indeed one more axiom satisfied by both. However, before we go on comparing propositional logic with linear spaces, it is useful to introduce the notion of a Boolean algebra. A bounded lattice L is called a Boolean algebra if it has an operation \neg and the following two axioms hold (I promise these are the last axioms to be introduced in this post).

(6)  x \land (y \lor z) = (x \land y) \lor (x \land z),    x \lor (y \land z) = (x \lor y) \land (x \lor z)

(7)  x \lor \neg x = 1,    x \land \neg x = 0

The first part of axiom (6) should look familiar: it is the distributive law discussed in most of the previous posts. Interestingly, the two parts of axiom (6) are equivalent; by assuming either one, the other can be derived using the first four axioms.3 Axiom (7) is also, more dramatically, known as ‘The Law of Excluded Middle’: the first identity states that either the proposition itself or its negation must be true, and the second that both cannot be simultaneously true.

The easiest example of a Boolean algebra is given by propositional logic: it is indeed distributive, and since each proposition is either true or false, (7) holds. A Boolean algebra doesn’t need to be limited to two truth values, though; an example of this is given by ‘the set of all subsets’ of any set (called, again more dramatically, the power set). For example, consider the subsets of the three-element set \{1, 2, 3\}, or to make it look less scary, let us call the elements knife, fork and spoon. Now imagine you are a student; consequently, your kitchen drawer contains only one knife, one fork and one spoon. The power set of this drawer is then all the combinations of cutlery you can possibly take out of the drawer. Since each of the three items is either taken or not, there are 2 \cdot 2 \cdot 2 = 8 possible combinations, as illustrated by the following graph.

[Image: the eight subsets of {knife, fork, spoon}, arranged by inclusion from \varnothing at the bottom to the full drawer at the top]

The symbol \varnothing stands for the empty set, that is, nothing is taken out of the drawer. An arrow between two sets in the graph indicates that the lower set is contained in the upper one. These sets form a Boolean algebra with the following operations: \land is the intersection of two sets, that is, it chooses the elements that are contained in both sets; \lor is the union of two sets, that is, it contains all the elements that are in either of the two sets; and finally, \neg x is the complement of the set x, defined by taking away all the elements of x from {knife, fork, spoon}. The top and bottom elements are {knife, fork, spoon} and \varnothing, respectively. As we can see, the Boolean algebra of the kitchen drawer doesn’t have much to do with truth values; or rather, it has more truth values than just true and false. Thus the sentence ‘I took the cutlery from the drawer’ can be partially true if, say, you only took the fork.
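For the computationally inclined, here is a minimal sketch in Python of the kitchen-drawer algebra (the helper names drawer, meet, join and neg are my own, chosen for this illustration), brute-forcing axioms (6) and (7) over all eight elements:

```python
from itertools import chain, combinations

drawer = frozenset({"knife", "fork", "spoon"})

def power_set(s):
    """All 2**len(s) subsets of s, each as a frozenset."""
    return [frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

L = power_set(drawer)                # the eight elements of the algebra

def meet(x, y): return x & y         # 'and' = intersection
def join(x, y): return x | y         # 'or'  = union
def neg(x):     return drawer - x    # 'not' = complement in the drawer

# Axiom (6), distributivity, for every triple of elements:
assert all(meet(x, join(y, z)) == join(meet(x, y), meet(x, z))
           for x in L for y in L for z in L)
# Axiom (7): x or (not x) is the top element, x and (not x) the bottom:
assert all(join(x, neg(x)) == drawer and meet(x, neg(x)) == frozenset()
           for x in L)
print(len(L), "elements, axioms (6) and (7) hold")
```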

Similarly, we can turn the algebra of linear spaces into a Boolean algebra by introducing a restriction on the allowed subspaces. We have already seen that it is a bounded lattice; now define the operation \neg x = x^{\perp} as the orthogonal complement of x, that is, the space of all vectors perpendicular to x. In the plane, the orthogonal complement of a line is just another line.

[Image: a line x in the plane together with its orthogonal complement x^{\perp}]

If we now fix two orthogonal subspaces of a given linear space, and include the zero space and the entire space, then they form a Boolean algebra under the operations as defined before. In two dimensions, this is just a choice of two orthogonal lines, and looks very much like the picture above.
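Indeed, for a line x in the plane, x and x^{\perp} together span the whole plane and meet only at the origin, which is exactly axiom (7):

x \lor x^{\perp} = 1,    x \land x^{\perp} = 0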

So by requiring the subspaces to be orthogonal, the logic of linear spaces becomes a Boolean algebra, and consequently the distributive law holds. It therefore looks like the non-distributivity demonstrated in the previous post arises because the lines are not orthogonal; and indeed, in that example the three lines were chosen so that they are not orthogonal. If we consider a three-dimensional space instead, and pick three lines which are all mutually orthogonal, we can easily see that the distributive law holds; take, for example, the following picture.

[Image: three mutually orthogonal lines x, y and z in three-dimensional space]

Now x \land (y \lor z) = (x \land y) \lor (x \land z), since every intersection involved results in the zero space, no matter which of these lines and planes intersect.
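Written out, both sides collapse to the zero space: y \lor z is the plane spanned by y and z, which x meets only at the origin, while x \land y = x \land z = 0, so

x \land (y \lor z) = 0 = 0 \lor 0 = (x \land y) \lor (x \land z)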

In fact, the logic of linear spaces (without the requirement of orthogonality) satisfies all the axioms except (6), distributivity. This answers the question of how close the linear spaces are to propositional logic; the answer is ‘very close’: they differ by only one axiom. There is even a special name for a set which satisfies the axioms (1)-(5) and (7), but not necessarily (6): it is called an orthocomplemented lattice, by analogy with the orthogonal complements of linear spaces. This demonstrates the power of the abstract approach; the similarities between the two have hopefully become apparent.

To conclude, let me emphasize that in order to turn an algebra of linear spaces into a Boolean algebra (which we would like to do, since we have a good intuitive understanding of Boolean algebras) we had to make a choice of orthogonal spaces. In general, there are infinitely many such choices. Very roughly speaking, each such choice corresponds to a set of quantum observables whose values can be known simultaneously with arbitrary precision. The internal logic of such a set is therefore classical, as the observables form a Boolean algebra. Let us call such a choice a Boolean frame; the interesting, and important, question concerns the interactions of these frames, for this is where logic ceases to be classical. One approach is to introduce the collection of all linear subspaces of a given space, together with some ‘choice maps’ corresponding to the Boolean frames. Now whenever there is a linear map between two Boolean frames, it induces a map between the ‘choice maps’, which hopefully tells us something about the relationship between the observables in the different frames. A more detailed discussion would unfortunately require us to first cover some topics from category theory, which could on its own be a subject for multiple series of blog posts.

This post concludes my exploration of quantum logic in the form of a blog, which was one part of my summer research project in 2017. It by no means concludes my fascination with the subject, nor my interest in learning more.


1We do not include implication, although it can be constructed using the other operations, namely, x \Rightarrow y \equiv \neg x \lor y.

2For propositional logic, this is pretty much its definition; for linear spaces one has to prove that each axiom holds.

3The proof of this is left for the interested and the inspired, or you can look it up; it is a standard result in lattice theory.


Pictures of lines

Thus far I have introduced the uncertainty principle and used it to demonstrate that quantum mechanics behaves quite oddly, to say the least, from the point of view of classical logic. What makes the uncertainty principle highly interesting for quantum theory is its almost complete independence of any physical details; it is indeed an inherent property of the mathematics used to describe the quantum world, and is thus necessarily present in all physical systems. This, in fact, suggests that the uncertainty principle is a derived property of something more fundamental and perhaps more elementary. Here we will leave the uncertainty principle behind and explore this idea by considering the geometry of the quantum state space; it turns out that a useful way to shed some light on the logic of quantum mechanics is to draw lines.

Quantum states are represented by vectors in a vector space, or more precisely, by linear subspaces of the state space1. For the sake of visualisation, we will concentrate on a two-dimensional Euclidean space, which is a fancy name for the familiar coordinate plane. So in our case each state corresponds to a line in the plane passing through the origin (it could also be the case that the state is the origin or the entire plane). Here is an example of two states, let’s call them red and blue:

[Image: two states, the red and blue lines through the origin of the plane]

We should immediately ask: what are suitable logical operations for ‘and’ and ‘or’? Since the lines represent physical states, we want to be able to say something like ‘The system is in the blue and red state’ as well as ‘The system is in the blue or red state’2. We will start with ‘and’, as it is perhaps more intuitive.

For the states represented by lines, ‘and’ is defined simply as the intersection of the lines3; recall that ‘and’ is represented by the symbol \land. That is, to say that the system is in the ‘blue and red’ state is to find where on our plane the state is simultaneously blue and red. Clearly, there are two possibilities here: if the lines happen to coincide (i.e. they were in fact the same line), the ‘and’ operation is redundant and the intersection is just the original line; in general, however, the lines will be different and will intersect only at the origin. In our example this looks like:

[Image: blue \land red, the two lines meeting only at the origin]

One could think that the definition of ‘or’ is as simple as that of ‘and’; it is tempting to define ‘or’ as the union of the two lines: the system is either in the blue state or in the red one. This naive definition is, however, implausible both mathematically and physically. Mathematically, the union of two lines is no longer a linear subspace of the plane; indeed, we agreed that each state is either a line, the origin or the entire plane, but the union of two lines is some kind of skewed cross. Thus this definition of ‘or’ does not preserve the structure we started with. Physically, the definition has even more catastrophic consequences: it implies that measuring the state should yield either the ‘blue result’ or the ‘red result’, depending on which state we happen to measure. We know, however, that this is not the case; instead the measurement will result in some linear combination (i.e. some mixture of both coordinates) of ‘blue’ and ‘red’. This is the so-called ‘principle of superposition’, and it already suggests the appropriate definition of ‘or’.

‘Or’ in our linear representation is defined as ‘the smallest linear subspace of the plane containing both lines’; recall that ‘or’ is represented by the symbol \lor. Again, two things can happen: the first is the redundant case where the two lines coincide and nothing happens; for two distinct lines, their ‘or’ is the entire plane. This definition overcomes both problems mentioned above: it guarantees that the resulting state is a linear subspace, and it accounts for superposition. Thus for the lines we started with, the ‘or’ operation looks like:

[Image: blue \lor red, the entire plane]

Equipped with these logical operations, let us reconsider the distributive law once again. For that, let’s add a third line, called green, to our plane:

[Image: the red, green and blue lines through the origin]

Recall that the classical distributive law asserts:

\textrm{green} \land (\textrm{blue} \lor \textrm{red}) = (\textrm{green} \land \textrm{blue}) \lor (\textrm{green} \land \textrm{red})4

Now, using the definitions of ‘and’ and ‘or’ above, let us figure out visually what both sides of this equality look like. They will turn out to be quite different.

We start with the left-hand side: green and (blue or red). We already know from one of the pictures above what (blue or red) is: it is simply the entire plane. Thus we need the intersection of the green line with the plane, which is just the green line itself:

[Image: the left-hand side, the green line]

For the right-hand side, both terms in brackets are intersections of two distinct lines: (green and blue) and (green and red). Thus they are both equal to the origin, as before. The result is then the smallest linear subspace containing the origin, hence just the origin itself:

[Image: the right-hand side, just the origin]

We have therefore managed to show that the distributive law does not hold for these definitions of ‘and’ and ‘or’. What is fascinating about this example is its similarity to the one with the ‘quantum cyclist’ discussed last time, which required a complicated construction and the uncertainty principle. This visual approach demonstrates that the non-distributivity of quantum logic is really a consequence of the geometry of vector spaces.
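If you would rather let a computer draw the pictures’ conclusion, here is a small numerical sketch of the same argument (my own illustration, not part of the original construction: subspaces are represented by matrices whose columns form a basis, and the helper names meet, join and same are hypothetical):

```python
import numpy as np

TOL = 1e-10

def col_basis(M):
    """Orthonormal basis for the column space of M (M may have 0 columns)."""
    if M.shape[1] == 0:
        return M
    U, s, _ = np.linalg.svd(M, full_matrices=False)
    return U[:, s > TOL]

def join(A, B):
    """'or': the smallest subspace containing both A and B."""
    return col_basis(np.hstack([A, B]))

def meet(A, B):
    """'and': the intersection of the subspaces spanned by A and B."""
    if A.shape[1] == 0 or B.shape[1] == 0:
        return np.zeros((A.shape[0], 0))
    _, s, Vt = np.linalg.svd(np.hstack([A, -B]))
    null = Vt[np.sum(s > TOL):].T      # vectors (u, v) with A u = B v
    return col_basis(A @ null[:A.shape[1]])

def same(A, B):
    """Subspaces are equal iff they have equal dimension and their join adds nothing."""
    return A.shape[1] == B.shape[1] == join(A, B).shape[1]

# Three distinct lines in the plane, as in the pictures:
red, blue, green = (np.array([[1.0], [0.0]]),
                    np.array([[0.0], [1.0]]),
                    np.array([[1.0], [1.0]]))
lhs = meet(green, join(blue, red))                 # the green line
rhs = join(meet(green, blue), meet(green, red))    # just the origin
print(same(lhs, rhs))                              # False: distributivity fails

# Three mutually orthogonal lines in 3D: both sides are the zero space.
x, y, z = np.eye(3)[:, [0]], np.eye(3)[:, [1]], np.eye(3)[:, [2]]
print(same(meet(x, join(y, z)), join(meet(x, y), meet(x, z))))  # True
```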


1Mathematically, the state space is a Hilbert space.

2Or the probabilities of these, in the case of quantum physics.

3More precisely, the intersection of the linear subspaces of the Hilbert space.

4For the intuition behind this, see the beginning of the previous post or the very end of the first post.

 

Intermission

I am not working on the project during July, so the blog has been a bit silent for a while. However, I decided to post something less serious and hopefully more entertaining in the meantime.

***

This is a physics joke I’ve heard somewhere; I’m putting it here as it alludes to the uncertainty principle discussed in this blog. It is often called the ‘Heisenberg uncertainty principle’, as Werner Heisenberg was the first to introduce the idea in the early 20th century.

Heisenberg, Schrödinger and Ohm are on a road trip. Heisenberg is driving, and their speed is way above the speed limit when they are stopped by the police. The officer says:

– Are you aware that you were driving at 140 kilometres an hour?
Heisenberg, very annoyed and upset, responds:
– Thanks! Now I have no idea where I am!
The officer finds the answer a bit weird and asks Heisenberg to open the trunk.
– Um, did you know you have a dead cat here? the officer asks.
Schrödinger immediately jumps out of the car and shouts angrily:
– Now we know, you idiot!
The officer decides to arrest all three for speeding, killing a cat and insulting a police officer. Ohm resists.

***

Since I’ve also been writing about logic, here’s the only logic joke I know.

Three logicians walk into a bar. The bartender asks: “Are all of you getting a beer?” The first logician looks at the other two and says: “I don’t know.” The second one also says: “I don’t know.” Immediately after hearing this, the third logician exclaims: “Yes!”

***

Perhaps my favourite book as a child was Lewis Carroll’s Alice’s Adventures in Wonderland, whose humour is often based on playing with logic. Here’s one example from Chapter Seven, A Mad Tea-Party.

‘Then you should say what you mean,’ the March Hare went on.
‘I do,’ Alice hastily replied; ‘at least – at least I mean what I say – that’s the same thing, you know.’
‘Not the same thing a bit!’ said the Hatter. ‘You might just as well say that “I see what I eat” is the same thing as “I eat what I see”!’
‘You might just as well say,’ added the March Hare, ‘that “I like what I get” is the same thing as “I get what I like”!’
‘You might just as well say,’ added the Dormouse, who seemed to be talking in his sleep, ‘that “I breathe when I sleep” is the same thing as “I sleep when I breathe”!’
‘It is the same thing with you,’ said the Hatter – –

***

I will conclude with a classic Knights and Knaves logic puzzle. (For many more, see https://sites.google.com/site/newheiser/knightsandknaves.) The setup is the following: you are on an island populated by Knights, Knaves and Spies. Knights always tell the truth, Knaves always lie, and Spies can either lie or tell the truth. Moreover, the inhabitants always hang out in groups of three, one Knave, one Knight and one Spy, and their only pastime is to give you logic puzzles consisting of figuring out who they are. So, you meet the inhabitants A, B and C, and they say:

A: I am a Knight
B: I am a Knave
C: B is not a Knight

Which one is which?

Quantum physics against intuition – Part II

In the first post we discussed the fact that classical first-order logic is distributive; that is, pizza and (lemonade or water) is the same as (pizza and lemonade) or (pizza and water), or symbolically,

x \land (y \lor z) = (x \land y) \lor (x \land z) .

This time the aim will be to come up with an example demonstrating that this very intuitive identity does not always hold in quantum mechanics. To do that, we will need the uncertainty principle discussed in the previous post.

Quantum cyclist

We are going to use the uncertainty principle for position and momentum to construct a system which does not obey the distributive law. To make the numbers a bit simpler, we take \hbar=1, so the uncertainty relation reads:

\Delta X \Delta P\geq \frac{1}{2} .

Recall the example with a cyclist from the first post: we observed that the cyclist being in some interval and having some velocity is the same as the cyclist being in the first half of the interval with the same velocity, or being in the second half of the interval with the same velocity. Now consider a (tiny!) quantum cyclist; for concreteness, suppose the cyclist is in the interval [0,1] and has momentum in the interval [0,\frac{1}{2}]. For simplicity, we take the uncertainty to be the length of the interval1, so we are saying that the cyclist is equally likely to be anywhere between 0 and 1 and equally likely to have any momentum between 0 and \frac{1}{2}. Hence we have \Delta X = 1 and \Delta P = \frac{1}{2}. Now let x, y and z be the following statements about our system (i.e. about the cyclist):
x = ‘cyclist has momentum in [0,\frac{1}{2}]’
y = ‘cyclist is in [0,\frac{1}{2}]’
z = ‘cyclist is in [\frac{1}{2},1]’.
The distributive law is:

x \land (y \lor z) = (x \land y) \lor (x \land z) .

Note that the left-hand side of this identity is precisely what we described above: the cyclist is in [0,1] with momentum in [0,\frac{1}{2}]. We calculate \Delta X\Delta P = 1\cdot \frac{1}{2} = \frac{1}{2}, which satisfies the uncertainty condition, and so the system is physically possible. On the right-hand side, however, we have (x \land y), that is, the cyclist is in [0,\frac{1}{2}] with momentum in [0,\frac{1}{2}], giving both \Delta X and \Delta P as \frac{1}{2}. But this violates the uncertainty bound, since \Delta X\Delta P = \frac{1}{2}\cdot\frac{1}{2} = \frac{1}{4}, which is certainly smaller than \frac{1}{2}! Since (x \land z) gives the same uncertainties, we must conclude that both terms on the right-hand side are physically impossible, and thus false. This makes the whole right-hand side false; we must therefore conclude that the identity cannot hold in this case, as it would equate a true statement about the physical system with a false one.
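The arithmetic is simple enough to check in a few lines of Python (a toy sketch of the computation above, with interval lengths standing in for the uncertainties, as in the simplification we made):

```python
hbar = 1.0
bound = hbar / 2             # the uncertainty principle demands dX * dP >= bound

dX, dP = 1.0, 0.5            # left-hand side: position in [0,1], momentum in [0,1/2]
print(dX * dP >= bound)      # True: the state is physically possible

dX, dP = 0.5, 0.5            # each right-hand-side term: position in [0,1/2]
print(dX * dP >= bound)      # False: violates the uncertainty bound
```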

The example above raises many questions for classical logic. Must we conclude that its axioms and rules of inference don’t always hold? If yes, what would the new axioms be, and how would they account for the fact that classical logic is distributive in everyday situations? If no, how do we account for the anomaly described above? It is not even clear whether there should be one formal system of reasoning flawlessly applicable in all situations to all possible systems. Whatever the answers to these questions, the example certainly opens up space for the development of a formal system correctly describing the logic of quantum mechanics.2


1This is actually not quite correct, e.g.  \Delta X should really be \frac{1}{\sqrt{12}}. We can, however, get the uncertainties we want by scaling the intervals accordingly, but this doesn’t really contribute to the understanding, and so we drop the scaling for clarity.

2For further reading, see https://plato.stanford.edu/entries/qt-quantlog/.

Quantum physics against intuition – Part I

In this post I will introduce the basic concepts of quantum mechanics which we will need later to show that quantum systems don’t always work according to classical logic. The number of concepts introduced may be a bit overwhelming; it is in fact enough to concentrate on the significance of the uncertainty principle discussed at the end of the post.1

Quantum states

In quantum mechanics, a physical system (e.g. a particle) is represented by a state, usually denoted by \phi or \psi. The state contains all the physical information about the system; what this means is best understood by analogy. We can think of a state as being equivalent to specifying the location of a car at any given time; from that we can obtain all the other physical properties, like velocity, which we get by calculating the difference in location between two instants in time.

Measurements and observables

The only way we can get information about a state is to measure it, which should be a fairly uncontroversial statement. Mathematically this is captured by acting on the state we want to measure with an operator, which is perhaps less intuitive. Continuing with the analogy of measuring the velocity of a car, the “velocity operator” for our car would calculate the difference in position of the car over some small time interval and then divide that difference by the time interval, while the “position operator” would simply read off the location of the car. Hence, with each measurable property (called an observable), like position, velocity, momentum2 and so on, we associate an operator, denoted by a capital letter.

An important property of quantum mechanics is that a measurement alters the state. That is, if we start with a state \phi and first measure its position X, we end up with the state X\phi. Suppose we now want to know the momentum P of the system; measuring P will give the momentum of X\phi rather than that of \phi. This is very different from classical physics, where the order of measurements (ideally) doesn’t affect the measured values.

To make things even more complicated, measurement is always statistical in nature. That is, X\phi doesn’t have a definite value which will be measured every time; instead, we can think of a whole range, or a set, of values associated with X\phi. The average of these values is called the expectation value of X; it is the statistical average obtained by measuring many identical states \phi and then averaging over the measured values.

Uncertainty

Because of the statistical nature of the measurements, there is a natural uncertainty associated with each observable, denoted by \Delta A for an observable A. The uncertainty tells us how the values of the observable are spread around the average: if \Delta A is small, there is almost no variation in the value of A, and we are very likely to measure the average value of the observable; on the other hand, if \Delta A is very large, the value of A could be almost anything, which amounts to the system having no information about that observable, as we could equally well guess the value instead of measuring it. It is important to note that the uncertainty arises from the fact that an observable has a range of possible values; it is thus an inherent property of the theory, and consequently an inherent property of nature, provided that quantum mechanics is an accurate description of reality3. The quantum mechanical uncertainty therefore has nothing to do with experimental uncertainty or the precision of our measurement devices.
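For the mathematically inclined (this formula is the standard definition, though it does not appear elsewhere in this post): the uncertainty is the standard deviation of the measured values,

\Delta A = \sqrt{\langle A^2 \rangle - \langle A \rangle^2}

where \langle \cdot \rangle denotes the expectation value discussed above.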

It is possible that the measurement of A doesn’t affect the measurement of B, in which case A and B are said to commute, and they behave more or less as in classical physics. However, if two observables do not commute, there will be an uncertainty relation between them, limiting the precision with which the system can have the properties represented by these observables. It so happens that position X and momentum P do not commute, which gives rise to the most famous uncertainty relation4:

\Delta X \Delta P\geq \frac{\hbar}{2}.

This is to be read: the uncertainty in position multiplied by the uncertainty in momentum is always larger than (or equal to) some constant; the actual value of the constant is more or less irrelevant, what is important is that it is larger than zero. One way to understand what the uncertainty relation conveys is to consider the extreme cases. Suppose the position of a particle is known with great precision; this means \Delta X becomes very small, but the product \Delta X\Delta P must be greater than the constant no matter what, and the only way this can happen is for the uncertainty in momentum to become very large, that is, for the system to lose all information about its momentum. Similarly, if the momentum is known with very high precision, the system loses all information about the location of the particle. What this means in practice is that there will always be some uncertainty in both position and momentum, and if more information is obtained about one of them, some information must be lost about the other.5
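A quick numerical illustration of this trade-off (my own toy example, using the SI value of \hbar):

```python
hbar = 1.0545718e-34  # reduced Planck constant, in J*s

# The better we pin down the position, the larger the minimum
# possible uncertainty in momentum becomes:
for dX in (1e-3, 1e-6, 1e-9, 1e-12):   # position uncertainty in metres
    dP_min = hbar / (2 * dX)
    print(f"dX = {dX:.0e} m  =>  dP >= {dP_min:.3e} kg*m/s")
```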

To conclude, I recommend this highly entertaining yet informative animation illustrating what kind of weird consequences all of this has.


1Although I promised not to introduce all the technical details, I couldn’t resist adding the mathematical derivation of the uncertainty principle as a separate document for those interested, though it may require some mathematical background.

2Momentum is defined as mass times velocity.

3All the experiments to date agree with the quantum mechanical predictions, indicating that the theory captures at least some features of nature correctly.

4An elegant way to derive the uncertainty principle using Fourier transforms can be found here. For an elementary derivation see the extra document.

5This simple inequality has far-reaching implications in physics; for further reading, see e.g. http://galileo.phys.virginia.edu/classes/751.mf1i.fall02/UncertaintyPrinciple.htm.

A very brief introduction to logic

Imagine you are walking to a friend’s house. When you reach the right street, you realise you are not sure about the house number; you know that it is either 23 or 24, but can’t remember which one. You also remember that the house is on the left side of the road. Fortunately, you notice that the houses to the right of you all have even numbers, and the ones to the left correspondingly odd. Hence, provided that what you remember is correct, you have enough information to ring the right doorbell without having to guess. Let us break down the possible reasoning going on here:

(1) The friend’s house number is either 23 or 24.
(2) The friend’s house is on the left.
(3) All the houses on the left have an odd number.
(4) The friend’s house has an odd number. (By (3) and (2))
(5) 24 is not odd.
(6) The friend’s house is not 24. (By (4) and (5))
(7) The friend’s house is 23. (By (1) and (6))

What is remarkable about this reasoning is that if (1), (2), (3) and (5) are all true, then so must be (7); that is, the reasoning is truth-preserving. There is, of course, nothing special about houses and numbers; we could equally well replace all the words by something which doesn’t even make much sense, for instance:

(1) My foot is either pink or ultramarine.
(2) My foot is stolen.
(3) All stolen feet are liked by the Holy Frog.
(4) My foot is liked by the Holy Frog. (By (3) and (2))
(5) No ultramarine foot is liked by the Holy Frog.
(6) My foot is not ultramarine. (By (4) and (5))
(7) My foot is pink. (By (1) and (6))

Formally, this is still a perfectly valid piece of reasoning. This, of course, by no means implies that the conclusion ‘My foot is pink’ is true; in this case at least (1) and (2) are certainly false, so the conclusion need not be true. If we learn anything at all from this exercise, it is the following crucial observation: what makes the reasoning correct is its form rather than its content. This is the basic description of philosophical logic: it tries to capture the correct forms of reasoning, where by ‘correct’ we mean truth-preserving. Formalisation of these rules of reasoning leads to so-called first-order logic.

While the field of mathematical logic is not limited to first-order logic1, it is the most common one considered in mathematics, philosophy and computer science. The reason for this is simple: first-order logic captures the reasoning we are familiar with in our everyday life. This kind of logic is intuitive and understandable to us; we are, in fact, as illustrated by the example above, constantly using these kinds of inference rules without putting in much effort or paying attention to them. This is the reason why a person who starts learning programming doesn’t need to learn the ‘rules of logic’ first; they are more or less hard-wired into our interaction with the environment.

First-order logic consists of terms, sentential formulas, logical operators and quantifiers. In addition to these, one has to define what a well-formed sentence is and give the inference rules2. The terms can be thought of as elements of a set, denoted by small letters a, b, c, ...; a term is made into a sentential formula by specifying a property it has, that is, the set it belongs to; these properties are denoted by capital letters. For example, Hx means ‘x has the property H’. The logical operators are \neg (not), \land (and), \lor (or) and \Rightarrow (implication). Finally, the quantifiers of first-order logic are \forall (for all) and \exists (there is). Using this formal language, we can now express symbolically the argument given in the example above:

(1) Ty \lor Fy
(2) Ly
(3) \forall x[Lx\Rightarrow Ox]
(4) Oy
(5) \forall x[Fx\Rightarrow \neg Ox]
(6) \neg Fy
(7) Ty

Here we have denoted: y = ‘the friend’s house’, T = ‘has number 23’, F = ‘has number 24’, L = ‘is on the left’, O = ‘has an odd number’. Thus, for example, (3) reads as ‘All houses on the left have an odd number’. To infer (4) from (3) and (2), we first use the inference rule \forall x P(x) \rightarrow P(y) (universal instantiation) to get Ly\Rightarrow Oy, which together with (2) and the inference rule A, A\Rightarrow B \rightarrow B (modus ponens) gives (4). We can now see explicitly that it doesn’t matter what the letters above stand for; the given argument is valid because we can justify each step by the inference rules of first-order logic.
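In fact, for a domain this small we can verify the truth-preservation mechanically. The sketch below (my own illustration; encoding each predicate as a pair of truth values over a two-element domain is an assumption of the toy model) enumerates every possible interpretation and confirms that whenever premises (1), (2), (3) and (5) hold, so does the conclusion (7):

```python
from itertools import product

domain = (0, 1)
valid = True
# Each of the predicates T, F, L, O is an arbitrary subset of the domain,
# encoded as a pair of truth values; y is an arbitrary element.
for T, F, L, O in product(product([False, True], repeat=2), repeat=4):
    for y in domain:
        premises = (
            (T[y] or F[y])                                 # (1)
            and L[y]                                       # (2)
            and all(not L[x] or O[x] for x in domain)      # (3): forall x, Lx => Ox
            and all(not F[x] or not O[x] for x in domain)  # (5): forall x, Fx => not Ox
        )
        if premises and not T[y]:                          # conclusion (7) must follow
            valid = False
print("truth-preserving:", valid)                          # prints True
```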

The metaphysical status of first-order logic is an interesting question in philosophy; it can be summarised as ‘What makes logical truths true?’ The suggested answers to this include logical realism, asserting that logic is a property of reality and that logical truths thus tell us something meaningful about reality itself; and logical formalism, according to which logical truths do not as such provide any new information about how things are in the world, rather, it is their form which makes them necessarily true.3 While the latter view may sound appealing in the light of the previous example, as we are about to see, there is something about logic which seems to carry along some of our assumptions about physical reality.

Since the inference rules of first-order logic are truth-preserving, if we start with a set of true statements about a physical system, everything we infer from this set of statements using those inference rules will also be true of the physical system in question. Or at least, if this is not the case, we have a serious problem with either our physical understanding of the system or with our logic. To illustrate this, consider the following example. We are told that a cyclist is somewhere on a road and has a speed of 30 km/h. It is immediately obvious to us that this is equivalent to: ‘either the cyclist is on the first half of the road with a speed of 30 km/h, or the cyclist is on the second half of the road with a speed of 30 km/h’. This rephrasing is in fact so trivial that you could (rightfully) complain that it is pointless and that I am just doing it to overcomplicate things. It does, however, illustrate a more general property of classical logic called distributivity. Using the formal notation defined above, distributivity can be expressed as:

\displaystyle x \land (y \lor z) = (x \land y) \lor (x \land z)
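In propositional terms, this identity can be verified by brute force over all eight truth assignments (a quick sketch, not part of the original post):

```python
from itertools import product

for x, y, z in product([False, True], repeat=3):
    assert (x and (y or z)) == ((x and y) or (x and z))
print("distributivity holds in all eight cases")
```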

What makes this obvious property of classical logic extremely fascinating is that it no longer holds in general for quantum mechanical systems.


1One can indeed come up with an entire zoo of exotic logics in mathematics; the Wikipedia entry has a good overview of these: https://en.wikipedia.org/wiki/Mathematical_logic

2For a full list of properties defining first-order logic, see http://mathworld.wolfram.com/First-OrderLogic.html

3For further reading, see https://plato.stanford.edu/entries/logic-ontology/