do not generate fresh questions, nor do they furnish new
proofs. They generate instead standard answers to an already
established set of questions. In principle the art only furnishes
1,680 different ways of answering a single question whose answer is
already known. It cannot, in consequence, really be considered a
logical instrument at all.
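The figure of 1,680, by the way, is easy to check mechanically. Here is a minimal sketch in Python (my reconstruction, following Eco’s description of Llull’s alphabet and table, not Llull’s own procedure): the nine letters B through K, taken three at a time, give 84 columns, and the table expands each column into twenty chambers.

    # A minimal sketch, assuming Eco's description of Llull's table:
    # nine letters taken three at a time give C(9,3) = 84 columns,
    # and each column is expanded into 20 chambers.
    from itertools import combinations

    letters = "BCDEFGHIK"        # Llull's nine letters (no J)
    columns = list(combinations(letters, 3))
    print(len(columns))          # 84
    print(len(columns) * 20)     # 1680, the figure Eco cites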
Eco considers Llull’s treatment of the question “Whether the
world is eternal” and concludes:
At this point, everything depends on definitions, rules, and a
certain rhetorical legerdemain in interpreting the letters. Working
from the chamber BCDT (and assuming as a premise that goodness is so
great as to be eternal), Lull deduces that if the world were eternal,
it would also be eternally good, and, consequently, there would be no
evil. But, he remarks, evil does exist in the
world, as we know by experience. Consequently we must conclude that
the world is not eternal. This negative conclusion, however,
is not derived from the logical form of the quadruple (which has, in
effect, no real logical form at all), but is merely based on an
observation drawn from experience.

(The quotations are from Eco’s chapter on Llull, in the
subsection headed “The alphabet and the four figures”,
pp. 63 and 64 in the paperback edition.)

Eco’s account may not be completely fair to Llull’s logic. In
the Ars brevis, Llull addresses this question in
a way that does seem directly related to the method: “Whether
the world is eternal. Go to the column BCD and maintain the negative.
In the compartment of BCTB you will find that if it were eternal there
would be many eternities differing in kind. These are concordant,
according to the compartment BCTC, but contradict each other,
according to the compartment BCTD, which is impossible. It therefore
follows that one must maintain the negative answer to this question,
and this is proved by rule B.” [Translation mine, based
on a comparison of Bonner and Fidora.] Llull’s
argument does exhibit a facility that justifies Eco’s reference to
legerdemain, but it appears on the surface to be tied tightly to
the combinations derived from the triple BCD; Eco’s suggestion
that the conclusion has nothing to do with the method seems to
need much more by way of substantiation.
I’m not entirely certain that Llull’s method is quite
as vacuous as Eco suggests. I think it may be possible to view it as
a kind of heuristic: you want to solve a certain problem; you can
think about it in these ways, and the combinatorics will show you new
ways to think about it. Maybe there is an algorithm for providing all
possible combinations of ideas so that when you think about it, you
will say, “Oh, wait, that one will help.”

Modern books on heuristics are not much different. If you read
Polya’s book How to Solve It
[Polya 1945], he will not tell you exactly how to solve
your problem. He gives you hints about ways to think about it that may
help. But you are responsible for recognizing that this one will help,
or at least trying it and seeing if it helps. Polya does not generate
fresh questions, nor furnish new proofs. He helps the reader find ways
to think about the problem which may enable the
reader to formulate relevant questions which put the
problem in a new light, and which may, if all goes well, lead to fresh
proofs. If we do not find Llull’s method as helpful as
Polya’s, it may merely be that we are not as interested in
theology as Llull was (or that we are more interested in geometry and
the other branches of mathematics Polya talks about).

Leibniz had other predecessors. The first Secretary of the
Royal Society of Great Britain, John Wilkins, wrote an enormous book
called An Essay towards a Real Character and a
Philosophical Language [Wilkins 1668].
There’s that phrase, “real
character”, again. Now, most people, if they have heard the
name John Wilkins at all, know the name from a short piece by Borges
in which Borges says Wilkins reminds him of a certain Chinese
encyclopedia [Borges 1981]. (Some people have thought
that this Chinese encyclopedia is a real Chinese encyclopedia.
It’s not; Borges made it up.) In this encyclopedia, the
Celestial Emporium of Benevolent
Knowledge,
it is written that animals are divided into: (a) those that
belong to the Emperor, (b) embalmed ones, (c) those that are trained,
(d) suckling pigs, (e) mermaids, (f) fabulous ones, (g) stray dogs,
(h) those that are included in this classification, (i) those that
tremble as if they were mad, (j) innumerable ones, (k) those drawn
with a very fine camel’s-hair brush, (l) others, (m) those that
have just broken a flower vase, (n) those that resemble flies from a
distance.
This piece became famous in part because Michel Foucault
read it and laughed so hard that he decided to call the entire history
of western philosophy into question. Perhaps what we think looks just as
ridiculous from the outside as this classification looks to us.

Borges tells us that he has never actually seen Wilkins’s
book, because even the national library of Argentina lacked a copy.
We, on the other hand, can read Wilkins because the book has been
scanned as part of the Early English Books Online project. There are
scans available on the web, and it has been transcribed by the Text
Creation Partnership, so that there is even a TEI-encoded version
publicly available.

When you read Wilkins, instead of just the parody of Wilkins in
Borges, I expect that many of you will have the same reaction I did,
which is, “Well, no, he’s not crazy at all.”
Wilkins’s work reads like a very complicated spec that involves
a lot of serious work and a number of unavoidable compromises.
(Perhaps Wilkins should be regarded as the world’s first Working
Group editor. Except that he had, essentially, a Working Group of
one.) The experience of reading Wilkins is not unlike the experience
of reading, say, any proposal for a top-level ontology written by
people in artificial intelligence or in the semantic web. Actually, it
is slightly different: I feel more sympathetic towards Wilkins;
I’m not quite sure why.

Ontologies in the sense of AI and the semantic web are also a
continuation of Leibniz’s concerns, a continuation that for all
of his hundreds of pages and hundreds of bibliography entries Umberto
Eco doesn’t talk about. But they show us that the notion of
perfect languages is alive after all. The idea of perfect
languages has, however, been split in two. People developing
ontologies don’t normally expect to make them into languages or
make them components of languages. They are there to enable reasoning,
but not necessarily to capture arbitrary utterances.

The other branch of modern work that descends from
Leibniz’s concerns is, of course, further work on the
systemization and automatization of reasoning: logic. One of the
creators of modern logic, Gottlob Frege, explicitly identified his
goal as the creation of a language in the spirit of Leibniz
[Frege 1879]. Now, to my great astonishment, he did not regard
himself as creating what Leibniz called a calculus
ratiocinator, or thinking calculus. He thought he was
creating a universal character, and his belief is a source of
continuing puzzlement to me, both because Frege makes such a sharp
(and value-laden) distinction between the two, and because, if one
does want to make the distinction, Frege’s work looks very much
more like a thinking calculus (it is, after all, a system for logical
inference) than like a language or set of atomic ideas (since for all
non-logical concepts Frege has recourse to conventional mathematical
notation). Perhaps Leibniz was not, after all, the last person
taken seriously by philosophers as a philosopher who tried to build, or wanted
us to build, a perfect language; maybe that was Frege.

Now, when they hear talk about identifying the atomic units of
human thought and defining things explicitly so that we can reason
about them, a lot of people get nervous, because surely that amounts
to an attempt to banish ambiguity and vagueness, and make everything
purely regular. And it might. But in fact, one of the great (and
occasionally surprising) things about modern logic is that it has far
more capacity (or at least tolerance) for vagueness and
underspecification than we sometimes give it credit for. At the heart
of this mystery is the fact that modern logic is developed without any
fixed vocabulary: it is, if you will, Leibniz’s
calculus ratiocinator without his
characteristica realis. The only thing modern
logicians say about vocabulary is that, yes, there are identifiers;
they mean whatever they mean — which is to say, they mean what
the person using them says they mean. The actual
interpretation, that is, formally speaking, the
mapping from identifiers to objects in the domain of discourse, is
completely out of scope for formal logic. Half of the books on
formal logic I have on my shelf don’t actually talk about the
structure of an interpretation; they just say, “That’s out of scope.”
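To make that division of labor concrete, here is a small sketch (my own illustration, not drawn from any particular textbook): one formula containing uninterpreted identifiers, evaluated under two different interpretations. The logic fixes only the connective; what the identifiers mean is supplied from outside.

    # One formula, two interpretations (a toy illustration):
    # the identifiers "Eternal", "Good", and "world" mean only what
    # the supplied interpretation says they mean.

    def implies(p, q):
        return (not p) or q

    # The formula Eternal(world) -> Good(world), purely structural.
    def formula(interp):
        return implies(interp["Eternal"](interp["world"]),
                       interp["Good"](interp["world"]))

    # Two interpretations: same identifiers, different mappings.
    interp_a = {"world": "w", "Eternal": lambda x: False, "Good": lambda x: False}
    interp_b = {"world": "w", "Eternal": lambda x: True,  "Good": lambda x: False}

    print(formula(interp_a))     # True
    print(formula(interp_b))     # False: truth varies with the interpretation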
Modern logic says, in effect, “You have these ideas. You
can reason about them this way.” What that means is you can
make them as vague or as underspecified as you need. So the kind of
ambiguity and vagueness that Yves Marcoux was talking about as being
essential parts of the formalization of his application domain
[Marcoux 2015] — that’s consistent with modern
logic. It’s not actually a contradiction of Leibniz’s
goal. It is possible to have logic that follows, if you will accept
the metaphor, the cowpath of human thought rather than imposing a sort
of rectangular system of paved sidewalks.

Another aspect may be worth mentioning. Many of the attempts at
perfect languages that Eco talks about really will work only if they
are universally successful. They depend crucially on the network
effect to have a reason for being. If everybody learns Esperanto, then
anybody can talk to anybody else in Esperanto, and we will never,
any of us, need to learn a third language. We’ll
have our native language, we’ll have Esperanto, and that will
suffice. And in the long run, anyone who has ever compared an N-to-N
translation problem with a translation into an interlingua and then
back out (which gives you a two-times-N translation problem) will know
it would be better — the overall cost to society would be much
lower — if everybody would learn an intermediate language.
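The arithmetic is easy to spell out (my own toy calculation): direct translation among N languages needs a translator for every ordered pair of languages, while routing everything through a single interlingua needs only one translator into it and one out of it per language.

    # Direct N-to-N translation vs. translation through an interlingua.
    def direct(n):
        return n * (n - 1)       # one translator per ordered pair

    def via_interlingua(n):
        return 2 * n             # per language: one in, one out

    for n in (5, 20, 100):
        print(n, direct(n), via_interlingua(n))
    # prints: 5 20 10 / 20 380 40 / 100 9900 200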
But such a choice would require the same willingness to ignore the short
term in favor of the long term that Sam Hunting was talking about the
other day [Durusau and Hunting 2015]. The long-term payback only
accrues if the entities involved have survived through the short term.
And so a lot of entities really want short-term return, and if
you’re given the choice between learning a language that would
be useful if and only if everybody else in the world learns it and
learning a language which is useful now because a lot of people in the
world already know it, then you will learn Chinese or English or
whatever the lingua franca is in your geographic region. Maybe you
will learn Esperanto for other reasons. But if you’re learning
it because you hope to use it as a universal language, you will
probably be disappointed for a few more centuries.

Jean Paoli, who was one of the co-editors of the XML spec and
who performed the signal service of persuading the product groups
within Microsoft to support XML, had a very straightforward way of
saying this, which I call the Paoli Principle: “If
someone invests five cents of effort in learning your technology, they
want a nickel’s return, and they want it in the first 15
minutes.” If they do see an advantage within 15 minutes, then okay.
Maybe they will persevere with your new technology. If they
don’t see a return within 15 minutes, you’ve probably lost
them.

Now, many of us have struggled with managers with 15-minute
attention spans, and many of us probably think the world would be a
better place if they had longer attention spans (say, at least 30
minutes). But people are the way they are, and if we want to persuade
them, we need to meet them where they are rather than demanding
that they change.

Another way in which what we do is sometimes different from what
Leibniz was talking about is that we have learned that vocabularies
are often a lot simpler when they do not attempt absolutely universal
coverage, so we get simplification efforts like the one Joe
Wicentowski was talking about the other day [Wicentowski and
Meier 2015]. I think it is probably a common experience
within the room that really complicated schemas that attempt to
blanket an entire application domain tend to be really, really big and
really, really hard to learn, and to spawn simplification efforts left
and right. So, we often straddle this divide; we create those big
schemas, but then we also create partial schemas because partial
schemas are easier to understand, easier to use, and easier to teach.
And as long as they suffice for a particular sub-application area,
they’re extremely useful. We don’t place the burden of
supporting all of scholarship on every vocabulary that we write, only
on a few of them.

Another difference, at least as of this conference: some of us
will say, “Wait, not everything needs to be explicit.”
David Birnbaum taught us that sometimes things don’t have to be
explicit [Birnbaum and Thorsen 2015]. Even when
they’re clearly relevant, we may get by without
tagging them, without making them explicit. I still have to think
about that, because I’ve always thought the purpose of markup is
to make things explicit, and it does make things explicit. David has
now pointed out that it does not follow that we must use markup to
make explicit representations of everything we wish to think about. We
may be able to get by without such explicit representations, and if we
are worrying about return on investment, the resulting reduction of
effort may make all the difference.

And we don’t normally actually reduce all of the concepts
in our vocabularies to atomic thoughts. Some of us think it would be
really interesting as an intellectual exercise, and possibly as a tool
in documentation, to say what atomic ideas go together to make up the
notion of “chapter”, say, but in practice the public
vocabularies we use don’t actually
define those atomic ideas. And they don’t need to. All we say is,
“We’re going to need chapters; we’re going to need
paragraphs.” They have some things in common; they are
different in other ways.

Mostly we are happy that we have been successful over the last
decades describing concepts like those purely in natural language,
without trying to identify their atomic constituents. Partly
that’s laziness — sorry, intelligent use of resources.
Partly, however, it’s that for whatever reason — possibly
because we actually are sitting in working groups, some of us —
we no longer share Leibniz’s faith that every time we analyze a
composite idea into its constituent atoms, we will get the same
result. Leibniz used the analogy of prime and composite numbers, and
some of you will remember that the proposition called the fundamental
theorem of arithmetic tells us that every integer greater than 1 has a
unique decomposition into primes. Every time we factor the number 728,
we will get the same decomposition into primes (2 · 2 · 2 · 7 · 13),
and there is only one such decomposition.
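For the skeptical, the claim about 728 is easy to test with a routine trial-division sketch (mine, of course, not Leibniz’s):

    # Trial division: 728 always yields the same multiset of primes.
    def prime_factors(n):
        factors, d = [], 2
        while n > 1:
            while n % d == 0:
                factors.append(d)
                n //= d
            d += 1
        return factors

    print(prime_factors(728))    # [2, 2, 2, 7, 13]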
Can any of us believe that everybody who decomposes the concept of
“chapter” or “section” into its constituent parts will get the
same answer every time they do it? I don’t believe it, and our
practice tells me that none of us believe it. So, leaving some of
those things inexplicit is one of the ways we achieve agreement and
inter-communicability.

The biggest difference, though, between what we do and what
Leibniz wanted to do is that the entire field of markup since before
ISO 8879 was a work item is founded on saying “no” to the
idea that we will have a single language. The Gencode Committee,
formed by the Graphic Communications Association in the late 1960s,
was chartered, as I understand it (I wasn’t there), to design a
set of generic codes that everyone could use. And I don’t know
how many meetings they had before they said, “No.” And,
like many a Working Group after them, they rebelled against their charter and
said, “We’re not going to do that. We’re going to do
something better; we’re going to do something different.”
The Gencode Committee escaped to the meta level. They said, “We
will define a metalanguage that allows you to define the tags you
want.” (Then we do not have to endure the hours of disagreement that
are necessary to reach agreement on whether to call it “chapter” or “section” or “div”.)
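That move is easy to illustrate with XML, the eventual descendant of that decision. In the sketch below (my example, not the Gencode Committee’s), one generic parser handles two documents with entirely different tag sets, because the metalanguage builds in no tags at all.

    # One metalanguage, two vocabularies (an illustrative sketch):
    # neither "chapter" nor "div" is built into XML; each project
    # defines the tags it wants.
    import xml.etree.ElementTree as ET

    doc_a = ET.fromstring("<chapter><p>Hello</p></chapter>")
    doc_b = ET.fromstring("<div><para>Hello</para></div>")

    # The same parser accepts both; the tag set is the user's choice.
    print(doc_a.tag, doc_b.tag)  # chapter div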
So, maybe we’re not actually following in the footsteps of
Leibniz. We don’t seem to aim for languages that exhaustively
categorize the atomic units of our thought, or that absolutely anyone
can use without change for their own purposes. Sometimes we don’t even
aim for vocabularies that make explicit all of the features of our
texts, even the ones we think are relevant.

And yet sometimes when we struggle long enough with a particular
problem in document analysis and modeling, we achieve solutions that
just feel right. And that is an exhilarating experience. That
exhilaration is a lot like the feeling offered by some poetry, which
is perhaps appropriate. When I took a course in the writing of poetry
as an undergraduate, the instructor told us that, in her view,
poetry is “calling things by their true names.”

When we design our systems — our languages and their
supporting software — some of what’s needed is technique,
and some of what’s needed is inspiration. From other
people’s work, we can improve our own technique, and from other
people’s examples, we can often draw inspiration. With luck,
Balisage this year has provided you with both: tips on technique and
inspiration for your own work. Thank you for coming.

References
Birnbaum, David J., and Elise Thorsen. Markup and
meter: Using XML tools to teach a computer to think about
versification. Presented at Balisage: The Markup Conference
2015, Washington, DC, August 11 - 14, 2015. In Proceedings of Balisage: The Markup Conference
2015. Balisage Series on Markup Technologies, vol. 15
(2015). doi:10.4242/BalisageVol15.Birnbaum01.
Borges, Jorge Luis.
The Analytical Language of John Wilkins,
tr. Ruth L. C. Simms.
In Borges: A Reader, ed. E. R. Monegal and
A. Reid.
New York: Dutton, 1981, pp. 141-143.
(Frequently reprinted.)
Couturat, Louis.
La logique de Leibniz,
d’après des documents inédits.
Paris: Felix Alcan, 1901.
On the Web in
Gallica: Bibliothèque numérique
and at
archive.org.
Durusau, Patrick, and Sam Hunting. Spreadsheets - 90+
million End User Programmers With No Comment Tracking or Version
Control. Presented at Balisage: The Markup Conference 2015,
Washington, DC, August 11 - 14, 2015. In Proceedings of Balisage: The Markup Conference
2015. Balisage Series on Markup Technologies, vol. 15
(2015). doi:10.4242/BalisageVol15.Durusau01.
Eco, Umberto.
La ricerca della lingua perfetta nella cultura europea.
Bari: Laterza, 1993.
English translation by James Fentress as
The search for the perfect language.
Oxford: Blackwell, 1995; paperback London: HarperCollins, 1997.
Frege, Gottlob.
Begriffsschrift, eine der arithmetischen nachgebildete Formelsprache des reinen Denkens.
Halle: Louis Nebert, 1879.
Reprinted since by a variety of publishers.
On the Web in
Gallica: Bibliothèque numérique.
Leibniz, Gottfried Wilhelm.
Generales inquisitiones de analysi notionum et veritatum.
Allgemeine Untersuchungen über die Analyse der Begriffe und Wahrheiten.
Edited, translated, and with a commentary by
Franz Schupp.
Latin — German.
Hamburg: Felix Meiner, 1982.
Philosophische Bibliothek Band 338.
Leibniz, G. W.
New essays on human understanding.
Translated and edited by Peter Remnant
and Jonathan Bennett.
Abridged Edition.
Cambridge: CUP, 1982.
[Leibniz, Gottfried Wilhelm.]
Leibniz.
Selected and introduced by
Thomas Leinkauf.
München: Diederichs, 1996.
Leibniz, Gottfried Wilhelm.
Die Grundlagen des logischen Kalküls.
Edited, translated, and with a commentary by
Franz Schupp,
with the collaboration of Stephanie Weber.
Latin — German.
Hamburg: Felix Meiner, 2000.
Philosophische Bibliothek Band 525.
[Llull, Ramon].
Ars brevis.
In
Doctor Illuminatus: A Ramon Llull reader.
Ed. and tr. by
Anthony Bonner.
Princeton: Princeton University Press, 1993,
pp. 289-364.
Lullus, Raimundus.
Ars brevis.
Translated and edited with an introduction by
Alexander Fidora.
Latin — German.
Hamburg: Felix Meiner, 1999.
Philosophische Bibliothek Band 518.
Marcoux, Yves.
Applying intertextual semantics to Cyberjustice: Many reality
checks for the price of one. Presented at Balisage: The Markup
Conference 2015, Washington, DC, August 11 - 14, 2015. In Proceedings of Balisage: The Markup Conference
2015. Balisage Series on Markup Technologies, vol. 15
(2015). doi:10.4242/BalisageVol15.Marcoux01.
Peano, Ioseph.
Arithmetices principia, nova methodo exposita.
Romae, Florentiae: Bocca, 1889.
Polya, G.
How to solve it: A new aspect of mathematical method.
Princeton: Princeton University Press, 1945; second
edition Garden City, NY: Doubleday Anchor Books, 1957.
Quin, Liam R. E.
Diagramming XML: Exploring Concepts, Constraints and
Affordances. Presented at Balisage: The Markup Conference
2015, Washington, DC, August 11 - 14, 2015. In Proceedings of Balisage: The Markup Conference
2015. Balisage Series on Markup Technologies, vol. 15
(2015). doi:10.4242/BalisageVol15.Quin01.
Russell, Bertrand.
A critical exposition of the philosophy of Leibniz,
with an appendix of leading passages.
London: George Allen & Unwin, 1900; new edition 1937, rpt.
several times since.
Sampson, Geoffrey.
Chapter 2, Theoretical preliminaries in his
Writing systems: a linguistic introduction.
Stanford, California: Stanford University Press, 1985,
pp. 26-45.
Usdin, B. Tommie. The art of the elevator pitch. Presented
at Balisage: The Markup Conference 2015, Washington, DC, August 11 -
14, 2015. In Proceedings of Balisage: The Markup
Conference 2015. Balisage Series on Markup Technologies,
vol. 15 (2015). doi:10.4242/BalisageVol15.Usdin01.
Walmsley,
Priscilla. Comparing and diffing XML schemas. Presented
at Balisage: The Markup Conference 2015, Washington, DC, August 11 -
14, 2015. In Proceedings of Balisage: The Markup
Conference 2015. Balisage Series on Markup Technologies,
vol. 15 (2015). doi:10.4242/BalisageVol15.Walmsley01.
Wicentowski, Joseph C., and Wolfgang Meier. Publishing
TEI documents with TEI Simple: A case study at the U.S. Department of
State’s Office of the Historian. Presented at Balisage: The
Markup Conference 2015, Washington, DC, August 11 - 14, 2015. In
Proceedings of Balisage: The Markup Conference
2015. Balisage Series on Markup Technologies, vol. 15
(2015). doi:10.4242/BalisageVol15.Wicentowski01.
Wilkins, John.
An essay towards a real character,
and a philosophical language.
London: Printed for Sa. Gellibrand, and for John Martyn, 1668.
Scanned pages are available from multiple sources on the Web, including:
Early English Books Online,
Bayerische Staatsbibliothek,
Google Books, and
second copy (Munich) at Google Books.
The TEI encoding made by the EEBO Text Creation Partnership mentioned in the text
is at
the EEBO TCP site.