# 2.5. Computing presented …


## 2.5.1. Monoidal closed preorders

An equation equivalent to (2.80) is given at the start of Heyting algebra. The reasoning behind the choice of ⊸ (named MULTIMAP in the Unicode standard, `\multimap` in LaTeX) for Heyting implication is mentioned in Heyting algebra § Category-theoretic definition. Per Heyting algebra § Formal definition, this symbol could perhaps more meaningfully be referred to as the relative pseudocomplement.

Prerequisite to Closed monoidal category. Later, with the background of Chp. 3, read Closed category for a better explanation of how categories of this type enrich themselves. The author seems to be defining the word “hom-element” as the hom-set for the category; because preorders have at most one element in their hom-sets, we can get away with this. See also the paragraph under Equation (7.10).

See also Brouwer–Heyting–Kolmogorov interpretation. As indirectly made more explicit in bhk-interpretation.pdf, the notation 〈0,a〉 and 〈1,a〉 is being used here in the sense of a disjoint union (categorical sum); that is, the join corresponds to a categorical sum.

*Exercise* 2.82.

For `1.`, let’s define:

Given \(x_1 \leq y_1\) where \(x_1, y_1 \in \mathcal{V}\), we must show that:

If we set:

Then we have by reflexivity:

And can use (a) in Definition 2.2. to conclude:

For `2.`, let’s define:

By reflexivity:

Using the fact that \(\mathcal{V}\) is closed we have:

This is what we intended to show. See nearly the same answer in part (c) of Proposition 2.87.

For `3.`, let’s define:

Given \(x \leq y\) where \(x, y \in \mathcal{V}\), we must show that:

Starting with:

Including the result from `2.` with \(x\) replacing \(w\):

Then using the fact that \(\mathcal{V}\) is closed, we have what we intended to show:

For `4.`, notice that in `2.` and `3.` we only showed one direction: that being monoidal closed implies that (v ⊸ −) is a monotone map. A quick review of Definition 1.95 makes it clear that this means (v ⊸ −) is a right adjoint, however, and this adjunction condition has the same structure as Definition 2.79.
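The structural match mentioned in `4.` can be written out side by side; a sketch in the book’s notation:

```latex
% Definition 2.79 (closed):      a \otimes v \leq w \iff a \leq (v \multimap w)
% Definition 1.95 (adjunction):  f(a) \leq w   \iff a \leq g(w)
% Taking f = (- \otimes v) and g = (v \multimap -), the two conditions coincide,
% so \mathcal{V} is closed iff (- \otimes v) \dashv (v \multimap -) for every v.
```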


*Example* 2.83.

We start with \(x + v \geq w\), which is the condition \(x \otimes v \leq w\) written out in **Cost** (whose order is the reverse of the usual order on the reals). Subtracting \(v\) from both sides gives \(x \geq w - v\), and since \(x \geq 0\) we can apply \(\max(0,-)\) to the right-hand side to get \(x \geq \max(0, w - v)\). Hence \(v ⊸ w = \max(0, w - v)\).

How is \(v ⊸ w\) an example of a “single-use v-to-w converter”? If you have a \(v\), you can use the monoidal product (+) to get a \(w\) because \(v + max(0, w - v) ≤ w\).
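As a spot check on \(v ⊸ w = \max(0, w - v)\), a small sketch (the helper names `lift` and `leq` are our own) verifying the closedness condition of Definition 2.79 over a few sample costs:

```python
# Sketch of the Cost quantale ([0, ∞], ≥, 0, +); helper names are our own.
def leq(a, b):
    """Cost's order: a ≤ b in Cost means a >= b as real numbers."""
    return a >= b

def lift(v, w):
    """v ⊸ w in Cost: max(0, w - v)."""
    return max(0.0, w - v)

# Closedness (Definition 2.79): x ⊗ v ≤ w  iff  x ≤ (v ⊸ w).
samples = [0.0, 1.0, 2.5, 4.0]
for x in samples:
    for v in samples:
        for w in samples:
            assert leq(x + v, w) == leq(x, lift(v, w))

# "Single-use v-to-w converter": combining v with v ⊸ w reaches w.
assert leq(3.0 + lift(3.0, 5.0), 5.0)  # 3 + max(0, 5 - 3) = 5 ≥ 5
```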

*Exercise* 2.84.

Note that \(w \vee \neg v \Leftrightarrow (v \Rightarrow w)\).

How is \(v ⊸ w\) an example of a “single-use v-to-w converter”? If you have a \(v\), you can use the monoidal product (∧) to get a \(w\) because \(v ∧ (w∨ ¬v) = v ∧ w ≤ w\).
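A quick spot check of \(v ⊸ w = \neg v \vee w\) against Definition 2.79, with a helper name (`lift`) of our own:

```python
# Sketch checking v ⊸ w = (¬v ∨ w) in Bool = ({False, True}, ≤, True, ∧);
# the helper name `lift` is our own.
def lift(v, w):
    return (not v) or w

bools = [False, True]
for a in bools:
    for v in bools:
        for w in bools:
            # Closedness (Definition 2.79): (a ∧ v) ≤ w  iff  a ≤ (v ⊸ w),
            # where ≤ on Bool is the usual False ≤ True order.
            assert ((a and v) <= w) == (a <= lift(v, w))

# Single-use converter: v ∧ (v ⊸ w) ≤ w.
for v in bools:
    for w in bools:
        assert (v and lift(v, w)) <= w
```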


*Remark* 2.89.

Some examples of categories enriched in themselves (i.e. closed):

- a **Bool**-category is enriched in **Bool**
- a **Cost**-category is enriched in **Cost**
- a **Set**-category is enriched in **Set**

See also Enriched category; any Closed monoidal category is enriched in itself.

## 2.5.2. Quantales

### Definition 2.90.

See also Infimum and supremum.

Prerequisite to Quantale. A unital commutative quantale is a symmetric monoidal closed preorder that has all joins. A unital quantale is a monoidal closed preorder that has all joins. This corresponds to the following alternative definition from the Wikipedia article:

> A unital quantale may be defined equivalently as a monoid in the category **Sup** of complete join semi-lattices.

The primary definition used by the Wikipedia article (based on a distributive property) is discussed in Proposition 2.98 below.

If we strip the term “unital” from quantale, we can’t call the preorder “monoidal” anymore because that would require unitality. A monoid without an identity element is a semigroup; the term semigroup isn’t used too often because it’s easier to say “a set with a closed (see Magma (algebra)) associative binary operation.” The Wikipedia article includes this definition by noting that such structures are sometimes called *complete residuated semigroups*.

At a high level the term “quantale” brings together order theory and basic algebraic structures.

The concept of a quantale is closely tied to binary relations. See Relation algebra and Algebraic logic / Calculus of relations and this comment.

*Example* 2.91.

*Exercise* 2.92.

1a. False

1b. ∞

2a. The usual logical or

2b. The minimum of the numbers (using the usual order of the reals)
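These answers can be spot-checked in code; a sketch with our own helper names, using the fact that **Cost**’s order is the reverse of the usual order on the reals:

```python
# Sketch of the joins behind these answers; helper names are our own.
inf = float("inf")

def join_bool(elements):
    # Join in Bool is "or"; over no elements it returns False (answer 1a).
    return any(elements)

def join_cost(elements):
    # Cost is ordered by ≥, so its join is min; the empty join is ∞ (answer 1b).
    return min(elements, default=inf)

assert join_bool([]) is False
assert join_cost([]) == inf
assert join_bool([True, False]) is True   # 2a: the usual logical or
assert join_cost([3.0, 5.0]) == 3.0       # 2b: the minimum
```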


*Exercise* 2.93.

We already defined the join for the empty set. For the singleton sets {T} and {F} the join is simply the same element of the set. For the set {T,F} the join is T.


*Exercise* 2.94.

Yes; the join of the empty set will be the empty set (the least element). The join of any singleton
set (e.g. \(\{\{T\}\}\) or \(\{\{F\}\}\) in **Bool**) will be the same element. All sets of subsets will
have a join that is the union of the included subsets.
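A sketch of this power-set quantale on the two-element set (helper names are our own):

```python
# Sketch of the power-set quantale on {T, F}; joins are unions and the
# bottom element is the empty set. Helper names are our own.
from itertools import chain, combinations

base = frozenset({"T", "F"})
subsets = [frozenset(s) for s in
           chain.from_iterable(combinations(sorted(base), r)
                               for r in range(len(base) + 1))]

def join(family):
    out = frozenset()  # the empty join is the bottom element ∅
    for s in family:
        out |= s
    return out

assert join([]) == frozenset()                       # least element
assert join([frozenset({"T"})]) == frozenset({"T"})  # singleton family
assert join(subsets) == base                         # union of all subsets
```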


*Remark* 2.95.

*Proposition* 2.96.

This proof is tricky; see Joins and Meets in Preorder - Mathematics Stack Exchange for a question specific to it and this text. To “have all joins” doesn’t mean (as in the definition of a Semilattice) merely that a join exists for *any nonempty finite subset*; a join semilattice only has all nonempty finite joins. A preorder that has all joins in the sense of Definition 2.90 has a join for every subset, even the empty one, and the join of the empty set is necessarily a bottom element. The author intends to restrict quantales to being based on a preorder that is a Complete lattice.

## 2.5.3. Matrix multiplication in a quantale

Why is it necessary to be closed and have all joins to support matrix multiplication? The general definition of matrix multiplication uses joins, i.e. they seem to be required by how we choose to define our general form of matrix multiplication. Being closed is likely necessary so that we have the distributive law as in Eq. (2.88).
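A generic sketch of the matrix-multiplication formula \((M * N)(x, z) = \bigvee_y M(x, y) \otimes N(y, z)\), parameterized by the quantale’s join and ⊗ (all the function names here are our own):

```python
# A generic sketch of (M * N)(x, z) = ⋁_y M(x, y) ⊗ N(y, z), parameterized
# by the quantale's join and ⊗; all names here are our own.
def matmul(M, N, join, tensor):
    n = len(M)
    return [[join(tensor(M[x][y], N[y][z]) for y in range(n))
             for z in range(n)] for x in range(n)]

# In Cost the join is min and ⊗ is +, recovering min-plus multiplication;
# in Bool the join is any() and ⊗ is "and", recovering reachability.
M = [[0, 3], [2, 0]]
print(matmul(M, M, join=min, tensor=lambda a, b: a + b))  # [[0, 3], [2, 0]]

R = [[True, True], [False, True]]
print(matmul(R, R, join=any, tensor=lambda a, b: a and b))
```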


*Exercise* 2.103.


*Exercise* 2.104.

For `1.` we have:

Using Definition 2.2. part (a), any \(0 \otimes x\) term is always \(0\), so this reduces to:

For `2.` we have:

Using symmetry, part (b) of Proposition 2.87, and symmetry again:

Pulling out \(M(w,x)\) with part (b) of Proposition 2.87 again, we have:


*Exercise* 2.105.

```
import numpy as np

inf = float("inf")
M1 = np.array([[0, inf, 3, inf],
               [2, 0, inf, 5],
               [inf, 3, 0, inf],
               [inf, inf, 6, 0]])

# Min-plus ("tropical") matrix product: (M ⊗ N)[i, k] = min_j M[i, j] + N[j, k].
# See https://stackoverflow.com/a/55986817/622049
M2 = np.min(M1[:, :, None] + M1[None, :, :], axis=1)  # M1^2
M3 = np.min(M2[:, :, None] + M1[None, :, :], axis=1)  # M1^3
M4 = np.min(M2[:, :, None] + M2[None, :, :], axis=1)  # M1^4 (equals M1^3: converged)
M1, M2, M3, M4
```

```
(array([[ 0., inf, 3., inf],
[ 2., 0., inf, 5.],
[inf, 3., 0., inf],
[inf, inf, 6., 0.]]),
array([[ 0., 6., 3., inf],
[ 2., 0., 5., 5.],
[ 5., 3., 0., 8.],
[inf, 9., 6., 0.]]),
array([[ 0., 6., 3., 11.],
[ 2., 0., 5., 5.],
[ 5., 3., 0., 8.],
[11., 9., 6., 0.]]),
array([[ 0., 6., 3., 11.],
[ 2., 0., 5., 5.],
[ 5., 3., 0., 8.],
[11., 9., 6., 0.]]))
```
