One of my friends complained that I didn’t put any interesting images on my blog. So I promised him I’d post something with an image on it. I thought this photo would bring a smile to his face.
Gotta love powers of two.
The thing that struck me about the Turing Test, when I heard about it, was the crazy thought: how do I know that other people are anything like me? If we can’t even judge faithfully between ourselves, how can we judge between humans and computers? For I can do nothing but observe reactions, the effect of my actions on others. I can’t read people’s minds: they could, for all I know, run entirely different mental software. Everyone is, in some sense, a black box.
The act of empathy, then, is a little presumptuous and magical: I trust that we can share an understanding with each other, assuming that because I share a similar hardware architecture with you, there's a very strong chance I share the same kinds of thoughts and values too. I think that's why we put so much emphasis on physical bodies. If someone looks different, I may fear them: they don't look like me, and so they might be thinking in an entirely different way. We all know the dilemma of the Turing Test deep in our bones, and we cheat a little (or a lot) by using irrelevant physical similarities in our judgements of one another.
I suppose I could be silly and wry and conclude that this is why women are mysterious and terrifying to me.
But I think that there’s something there that deserves more serious thinking.
People are often particular about naming. Names still hold special power: we may rationalize that names don’t matter, but there’s something hardcoded in human beings about choosing the right name for a thing. It’s a hard thing to fight.
Programmers are people too, and programmers also care (sometimes too much) about the right name for a thing. As a concrete example, the word lambda scares off a few people, since it's a foreign word. We might like to be brief and be able to say:
(def (make-adder x)
  (fn (y) (+ x y)))
where def and fn are really meant to behave like define and lambda, respectively. One nice thing about Scheme is that it’s plastic enough to absorb a trivial change like this. Here’s a small library for aliasing identifiers:
(module alias mzscheme
  ;; Practice writing macros that themselves generate macros.
  ;;
  ;; (alias old-keyword new-keyword)    SYNTAX
  ;; Creates a new syntax for new-keyword; any occurrence of new-keyword
  ;; is replaced with the old-keyword.
  (provide alias)
  (define-syntax (alias stx-1)
    (syntax-case stx-1 ()
      [(_1 orig new)
       (syntax/loc stx-1
         (define-syntax (new stx-2)
           (syntax-case stx-2 ()
             [(_2 e (... ...))
              (syntax/loc stx-2
                (orig e (... ...)))])))])))
Once we have a tool like this, we can test it out:
(module test-alias mzscheme
  (require alias)
  (alias lambda fn)
  (alias define def)
  (def (make-adder x)
    (fn (y) (+ x y))))
That being said, the above example is a silly thing to do. Even in competent hands, this is easy to abuse; in the wrong hands, one can end up with something that belongs to an obfuscation contest.
> (alias alias a)
> (a define d)
> (a if i)
> (d (f x) (i (= x 0) 1 (* x (f (- x 1)))))
> (f 3)
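For the record, once the aliases are unwound, that last definition is just factorial; here it is again with its standard names restored:

```scheme
;; What the obfuscated definition above really says:
;; d is define, i is if, so f is the factorial function.
(define (f x)
  (if (= x 0)
      1
      (* x (f (- x 1)))))

(f 3)  ; evaluates to 6
```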
Communication between people means that we “come to terms”, and the inherent abuse here is to turn the language into something that no one else can read. In PLT Scheme’s case, the above is especially bad because there’s already an established mechanism for controlled renaming of identifiers as part of PLT Scheme’s module system.
(module test-alias mzscheme
  (require (rename mzscheme fn lambda)
           (rename mzscheme def define))
  (def (make-adder x) (fn (y) (+ x y))))
Still, I thought the macro definition was cute (even if it's silly), and it's one of the more direct applications of a “macro-generating macro” that I’ve seen. But maybe this is all just syntax, futzing with words, distracting us from the real work of nailing down the semantics of our thoughts.
I didn’t get this so strongly before, but there’s a correspondence between the language and structures of first-order logic, and the signatures and units of PLT Scheme (as well as the signatures and structures of ML). I guess the analogy is something like: “language::signature as structure::unit.”
As a concrete example, a first-order language might have just equality (=) and a binary relation P. Once we have a language, we can start writing statements in that language. Here are two statements using the language:

(for all x, y, z: P(x, y) and P(x, z) implies y = z)
(for all x: there exists y: P(x, y))
These two sentences, taken together, are trying to say “P must be a relation that represents a total function”. These sentences don’t have a true or false value without being evaluated in the context of some structure. A structure tells us what domain of thingies (the universe) we’re iterating over when we say “for all” or “there exists”, and also provides a concrete definition for the relations of the language.
There are some structures that might satisfy this, and other structures that don’t. Structures that satisfy the sentences we care about are called “models”. One example of such a model for those two sentences might be the natural numbers (0, 1, 2, …), where the interpretation of P is:
P(x, y) if and only if y = square(x)
But an example of a similar structure that wouldn’t fit the above conditions would be natural numbers under a P that’s not a function relation, like:
P(x, y) if and only if x < y
and so we’d say that this structure isn’t a model of those sentences.
Programmers have an intuitive idea about this: it’s similar to the distinction between “interface” and “implementation”: the language provides the interface, and the structures provide implementations for that language. And when we have an interface, we usually have some implicit idea of how that interface should behave; if we have an interface called Stack, we have an expectation about what it should do. We can make those assumptions explicit by writing contracts or predicates with that interface: the things that satisfy our expectations are good implementations.
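To make that concrete with a toy sketch (the names make-stack, push, and pop here are my own invention, not from any particular library): an interface is a set of operations, and an expectation about it can itself be written as a predicate over any implementation of those operations.

```scheme
;; A "stack interface" is just three operations; our expectation --
;; popping a freshly pushed value gives that value back -- becomes a
;; predicate over any implementation of those operations.
(define (stack-expectation? make-stack push pop)
  (andmap (lambda (v)
            (equal? v (pop (push (make-stack) v))))
          '(1 2 3)))

;; One implementation that satisfies the expectation: stacks as lists.
(define (make-stack) '())
(define (push s v) (cons v s))
(define (pop s) (car s))

(stack-expectation? make-stack push pop)  ; evaluates to #t
```

Implementations that pass the predicate are good implementations, in the same way that structures satisfying our sentences are models.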
Again, to make this concrete, when we’re programming with signatures and units, we first have to specify the things we’re going to be fiddling with. For example, a signature like:
;; A function^ model consists of a universe of elements,
;; and a 2-place predicate P between elements.
(define-signature function^ (universe P))
gives us a language for talking about and using units that implement this signature. For example, to build a unit where P is the squaring relation, we can do this:
(define-unit squaring@
  (import)
  (export function^)

  (define universe (list 0 1 2 3 4 '>4))

  (define (P x y)
    (cond
      [(equal? x '>4)
       (equal? y '>4)]
      [(> (* x x) 4)
       (equal? y '>4)]
      [else
       (equal? y (* x x))])))
(I’m defining the P function a bit peculiarly since I want the universe of elements to be finite.)
But again, not all implementations of the function^ signature will do the right thing. Here’s a structure that doesn’t satisfy the intent of the function^ signature.
(define-unit less-than@
  (import)
  (export function^)

  (define universe '(1 2 3))

  (define (P x y)
    (< x y)))
How do we capture intent? We want to be able to catch implementations that don’t do what we expect, and one way is to write code to exercise the units. Here’s one example:
;; function-model?: function@ -> boolean
;; Checks to see if the given unit satisfies what we
;; expect out of functions: that their domain is total,
;; and that each element of the domain maps to
;; just one element of the range.
;;
;; In first-order logic:
;; (for all x, y, z: P(x, y) and P(x, z) implies y = z)
;; (for all x: there exists y: P(x, y))
(define (function-model? model@)
  (local ((define-unit-binding a-model@
            model@ (import) (export function^))
          (define-unit function-tester@
            (import function^)
            (export)
            (and (for-all* (lambda (x y z)
                             (implies? (and (P x y) (P x z))
                                       (equal? y z)))
                           universe universe universe)
                 (for-all* (lambda (x)
                             (exists? (lambda (y) (P x y))
                                      universe))
                           universe))))
    (invoke-unit
     (compound-unit/infer (import) (export)
                          (link a-model@ function-tester@)))))
(There are a few definitions I’ve left out here; see logic-practice.ss for the other definitions.)
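For the curious, here's a guess at the shape of those left-out helpers, sketched as plain list functions; the real definitions in logic-practice.ss may differ:

```scheme
;; implies?: material implication on booleans.
(define (implies? p q) (or (not p) q))

;; exists?: does some element of elts satisfy pred?
(define (exists? pred elts)
  (ormap pred elts))

;; for-all*: does pred hold for every tuple drawn from the
;; cartesian product of the given universes?
(define (for-all* pred . universes)
  (let loop ((args '()) (rest universes))
    (if (null? rest)
        (apply pred (reverse args))
        (andmap (lambda (x) (loop (cons x args) (cdr rest)))
                (car rest)))))
```

With these, something like (for-all* (lambda (x y) (implies? (< x y) (<= x y))) '(1 2) '(1 2)) evaluates to #t, since quantifying over a finite universe is just a nested loop.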
Once we have this, we can now check to see if squaring@ and less-than@ satisfy the function-model? predicate.
> (function-model? squaring@)
#t
> (function-model? less-than@)
#f
It finally feels like spring has arrived here in Worcester. The flowers are blooming from the trees in great poofy blossoms of white and pink. After my first winter here on the East Coast, it feels glorious to stand outside and just soak in the sun.
This semester is finally over. I still feel a little unsatisfied with my performance in Dan's logic class; the simplest things trip me up. Every time I do a logic session, I come out feeling mentally pummeled. Brain squish just like grape! Not because the material's necessarily difficult, but because I come in with so much mental confusion.
For example, I got confused between the Skolem hull and the Herbrand universe, which are different concepts. The Skolem hull is a submodel containing just the points that are interpretations of the nameable elements of a language. A Herbrand universe, on the other hand, is a set containing the nameable terms themselves. (For a language with a constant 0 and a successor function s, the Herbrand universe is the set of terms {0, s(0), s(s(0)), …}, while the Skolem hull lives inside a particular model, collecting the elements those terms denote.) A hash collision in my head.
We talked about models that include “junk” (elements that can’t be named by terms) and “confusion” (named terms that go to the same element). It’s funny, though, how the terms of logic are adopted by other subcultures, things like “compactness” and the “no junk, no confusion” properties, and in ways that are totally different from the definitions in mathematical logic.
I should probably spend a few more days in the sun and try to weed out the junk and confusion in myself, the ill-formed and redundant thoughts.