Introduction

In her opening to this year’s conference, Tommie Usdin talked about the central concerns of the community that we like to think is represented at this conference and how the conference reflects them. In particular, she stressed the centrality of the concept of descriptive or generic markup. We’re a conference about markup, and in particular, about generic markup, not a conference about XML or the XML technologies. [Usdin 2018] We are, of course, interested in both, but we’re interested in XML and its technologies as a way of working with generic markup and not vice versa.[1]

I’d like to continue Tommie’s line of thought a little and consider what it means that we have something that we think of as a community centered around the use of descriptive or generic markup and how we came to have such a community in the first place. If you consider the software ecosystem that grew up around SGML and later around XML, and the community that has grown up in and around those ecosystems, I think you will agree on reflection that neither the existence of the ecosystem nor the existence of the community is at all inevitable. We use generic XML tools on top of which most of us build our applications with XSLT and CSS stylesheets and XQuery queries and so on. It is as if the programming language community had developed not a market for parsing techniques and algorithms and parser generators, but a market for programming language parsers which were sold as separate products from the actual interpreters or compilers which were built on top of them and which could switch from one off-the-shelf parser to another. I think you’ll agree that a world in which that had happened in computer science would be rather different from ours and that many of us would find it rather odd.

With that example in mind, who would have said before the fact that the recognition of structure expressed in markup, and the validation of that structure against a generic definition roughly equivalent to a context-free grammar, would together be a substantial enough contribution to a workflow to enable a market for products to perform those tasks? I know I didn’t see that coming.

Note also that this entire ecosystem seems to be an invention of the SGML community — and possibly also the condition and cause of the existence of that community. There may have been people in the 1970s and 1980s, inside or outside ISO/IEC JTC1/SC18/WG 8, whose vision of the future was more or less what happened, but it’s not in the spec. Now, granted, ISO 8879 is notoriously vague on things like that, but in particular, it seems pretty clear, reading ISO 8879, that it is designed to be compatible with a world in which IBM’s DCF GML and the academic version Waterloo GML and even DCF Script, and possibly even TeX and troff (with suitable modifications), could be given retroactive SGML Declarations that made them conforming applications of SGML with a built-in vocabulary where no generic SGML tools at all would be involved at any point in the processing flow. The Amsterdam SGML parser – one of the first SGML parsers widely available – was not, in fact, a parser, but a parser generator. You gave it a DTD, and it produced a parser for documents in that vocabulary. Presumably (I never actually got it to run, so I have never actually used it, but I did check the documentation) the parser it generates is suitable for adding your own semantic actions to perform the actual processing desired for the data.

As these observations suggest, the SGML community, like a lot of communities, faced a problem of binding time. Is the grammar for a vocabulary baked into the parser, or is there a generic parser that reads the grammar just before parse time? Early binding, late binding? Generic tools for processing data, or generic tools for generating programs to process data? Compiler or interpreter?
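The two binding strategies are easy to contrast in miniature. The sketch below (in Python, with an invented toy grammar format – nothing here corresponds to real SGML machinery) shows the late-binding approach: a generic validator that reads content models at run time, the way a generic SGML or XML parser reads a DTD, rather than having the grammar compiled in as a generated parser would.

```python
import re
import xml.etree.ElementTree as ET

# A toy grammar: each element name maps to a content model over the
# names of its child elements, read at run time (late binding).
CONTENT_MODELS = {
    "book": "title chapter+",
    "chapter": "title para*",
    "title": "",   # text only, no element children
    "para": "",
}

def compile_model(model):
    # Translate "title chapter+" into a regex over child-name sequences.
    parts = []
    for token in model.split():
        if token[-1] in "+*?":
            name, suffix = token[:-1], token[-1]
        else:
            name, suffix = token, ""
        parts.append(r"(?:%s\s)%s" % (re.escape(name), suffix))
    return re.compile("^" + "".join(parts) + "$")

def validate(elem):
    # Check this element's children against its model, then recurse.
    model = compile_model(CONTENT_MODELS[elem.tag])
    children = "".join(child.tag + " " for child in elem)
    if not model.match(children):
        raise ValueError("bad content in <%s>: %r" % (elem.tag, children))
    for child in elem:
        validate(child)

doc = ET.fromstring(
    "<book><title>T</title>"
    "<chapter><title>One</title><para>text</para></chapter></book>")
validate(doc)  # passes silently; an invalid document raises ValueError
```

The early-binding alternative would instead emit the equivalent of compile_model’s output as source code, fixing the vocabulary at build time – the parser-generator route the Amsterdam parser took.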

Now, we know that there are technical reasons for preferring one or the other, and there are technical consequences of either choice. But I confess it takes me a little bit by surprise to see that there could be such profound social consequences of what I would have thought was basically a technical choice. It does not seem to me at all likely that a community of interests of the form we know would have grown up around descriptive markup, had the technical development of SGML software taken a different path.

What is a community?

That, in turn, leads me to wonder: What exactly do we mean by ‘community’? How do communities form? I’d like to distinguish three clusters of reasons among the reasons that communities form.

First and most obvious, communities can form in the service of common, material interests. We should all be familiar with this phenomenon because it’s a major driver of standardization work.

A few years ago Eduardo Gutentag, then of Sun Microsystems, gave a memorable talk about IPR policy, in which in passing he explained the arithmetic of standards participation as seen from a purely commercial point of view [Gutentag 2006]. Any company – any commercial entity – considering participation in a standards effort faces, he said, what is arithmetically a very simple calculation: If we write our own product without any reference to anyone else or any standards, how much is it going to cost? How many are we going to sell at what price? Subtract costs from gross, calculate net, calculate return on investment. It’s numbers.

If we participate in an effort to develop a common standard, how much is that going to cost? How will that affect our development costs? How big will the overall market for standards-conformant products be, and what will our market share be? Do some more multiplication, division, subtraction, and addition – calculate profit there. Two numbers: Which is bigger? Which do we want to do?
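The arithmetic really is that simple – the whole difficulty lies in estimating the inputs. A sketch in Python, with figures invented purely for illustration:

```python
def net_return(units_sold, unit_price, dev_cost):
    # Profit is gross revenue minus cost; ROI is profit per dollar invested.
    profit = units_sold * unit_price - dev_cost
    return profit, profit / dev_cost

# Invented figures: standardization costs more up front and lowers the
# unit price, but grows the addressable market.
go_it_alone = net_return(units_sold=10_000, unit_price=500, dev_cost=4_000_000)
standardize = net_return(units_sold=25_000, unit_price=400, dev_cost=5_500_000)

print(go_it_alone)  # (1000000, 0.25)
print(standardize)  # (4500000, 0.8181818181818182)
```

For a non-dominant player confident of holding its market share, the larger standardized market can outweigh the extra cost and the lower price; for a player that already owns most of the market, the same arithmetic points the other way.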

It’s not unusual, if you think about it in that way – or it’s not unpredictable – that non-dominant players in a marketplace will frequently find it to be in their commercial interests to participate in development of a standard because standardization helps to grow the market. And if you have some confidence in your ability to maintain market share, that means you will be making more money. It is similarly easy to understand – and probably for some people of nimble brain to predict – that organizations in a massively dominant position in the marketplace will find that that arithmetic comes out the other way for them. Growing the marketplace is not going to make that much of a difference to their bottom line. And the difference that it is likely to make is more likely to be negative.

So, it’s not an anti-social sentiment that causes a dominant player, like IBM in the 1980s or other organizations since, not to play nicely in standards organizations. It’s not to their commercial interest. (Except sometimes it is, and then they’ll play nicely.)

The development of SGML itself reflects a sort of unusual alliance among customers of typesetters and typesetting vendors. There’s almost always a sort of uneasy relationship between customers and vendors. And what those two groups wanted, as I understand it, was this – I wasn’t there so this is history as over-simplified from my memory of what I was told, and you should probably take it with a grain of salt. But, as I understand the story, the typesetting customers, the publishers – among others – wanted control of their data back, and they wanted to be able to switch vendors. They wanted more competition among vendors so that they could play them off against each other. And the vendors – all except a huge printing house who shall remain nameless and who dominated the marketplace – the typesetters all wanted better interoperability because they wanted to steal customers from other typesetters, in particular, from Huge Nameless Corporation.

Now, not everybody is going to get what they are hoping for in that situation, because it is extremely unlikely that every typesetting company is going to steal enough customers from the other typesetters to make up their own losses and come out ahead. Somebody is miscalculating there. But enough people thought it was in their interest to have a typesetter-independent way of driving the typesetting machine to want to develop generic coding. And ultimately they did.

Of course, it’s also very easy to get the numbers wrong. The arithmetic is easy, but getting the right numbers is not at all easy, and it’s not hard to find persuasive stories about corporations that cancelled their Unix product because it didn’t have enough sales to justify the effort they put into it, without the people who made that decision realizing that the existence of the Unix product was a material support to the sales of the Windows and Mac versions, because the existence of the Unix product neutralized potential objections from the systems administrators in a potential client company. So the Unix product was paying for itself – more than paying for itself – but the bookkeeping of the company, the accounting system of the company, didn’t make that visible, because accounting systems are not always very good at understanding the psychology of customers.

It’s interesting to notice, however, that at a lot of levels, in particular the level of individuals, commercial interests are often secondary to other factors. In many standards groups, I have had the experience of watching members of a group leave one company and go to work for another. Often the companies are in related fields, but often the new employer is a very different kind of company, and it’s been interesting to me to notice that what last month was in the crucial technical interest of this company is now in the crucial technical interest of that company, which actually has a very different commercial position. There were, of course, other members of the Working Group who, when they changed companies, changed their technical tune because they spent their time talking to other people in the company, and so Joe, who last month was arguing vehemently against this feature, is now its most fervent supporter because Joe’s paycheck now has a different name on it. And in other standards groups you’ll notice an interesting phenomenon. I’ve been told that when the semiconductor industry was developing an SGML vocabulary, both in meetings and at lunch you’d see clusters forming: all the engineers who worked on embedded devices sitting together and arguing for the same things, and all the people who worked on CPUs or memory chips sitting together and arguing against them, even though the same companies were represented in both groups. The common interest in devising semiconductor devices for embedded systems created a tighter bond, at least for lunch and within Working Group meetings, than the mutual commercial interest of working at the same company as colleagues in a different division.

Academics will remember the notion introduced, I guess, 50-60 years ago by sociologists of science, who talked about the invisible college. They noticed that if you ask a typical academic who their closest colleagues are, they almost never mention anyone in the same institution; they mention people in the same discipline and, in particular, the same sub-field at other institutions. So, in some ways, it appears that there are stronger community-forming forces in what you might call soft factors than in material factors. Shared intellectual challenges, shared situations, shared approaches to those challenges – these can be more important for a sense of community than pure material interests, and they shade more or less imperceptibly over into better mutual communication – it’s easy to observe in social life and political life that common languages are crucial in forming communities. We see that both at the level of nation states in Europe and at the level of regional formations within nation states. People who speak the same dialect feel more commonality than people who speak different dialects of the same national language. That, in turn, shades over into shared values, feelings of empathy more generally, and an emotional feeling of belonging, which is precisely what the German sociologist Ferdinand Tönnies identified as the crucial factor in the formation of what he called Gemeinschaft, which I will render as community, and which he opposed to Gesellschaft, society, a more abstract, less personal relationship with a collective whole, more frequently based on material interests.

Now, the kinds of communities Tönnies was talking about are villages and so forth, and the kind of community I’m talking about is really a sort of emotional and intellectual sub-community within what he was calling Gesellschaft, so I’m not suggesting his analysis applies directly, but the notion that a community is formed by empathy and by a feeling of belonging certainly agrees with my experience of working in SGML and XML. I come back to these conferences because I feel like I belong here; I feel like there are people like me here.

So, communities form around languages. They can form around shared activities; they can form around shared interests and challenges, shared values, shared enemies, shared threats, and shared history. At some level, we have a community around generic markup because a man named Yuri Rubinsky made it his mission, among many other missions, to create such a community or to persuade people that we had one. Those who came to descriptive markup after 1995 or so never had a chance to meet Yuri Rubinsky, so I should mention explicitly that he was a great salesman. He could persuade a lot of people about a lot of things. But he was not a snake-oil salesman. The reason he managed to persuade people that we could form a community or we did form a community was that he correctly identified that we had commonalities that could suffice to make a community, and he cultivated common work and common activities that would help bind that community.

What binds this community together?

That leads me to the question: Well, what are our commonalities? What are our common interests, our shared challenges? I’m not going to try to make an exhaustive list; I don’t think I could. But there does seem to be a common interest in data longevity that can be motivated in a purely commercial sense. The longer we can keep the data around and usable, the more money we can make off it, the better our return on investment, and the better our bottom line. It can be culturally motivated: I want this to stick around because it’s Beowulf. It’s got to stick around.

We’re interested in application and data reuse. We can have motivations for that interest that vary a great deal. They can be purely commercial, purely selfish. They can be idealistic. They can be born out of a sense of responsibility. And it follows from those interests, in a chain of logic that I won’t try to explain in detail here, that we are necessarily interested in better representation of our information: better data formats. Because it turns out that a more honest and correct representation of our underlying data is almost always better engineering, and like all good engineering it will improve our bottom line if we use it correctly.

Many of us build systems that use descriptive markup either for ourselves or for paying clients. So, we have a common interest in tools and techniques and skills for building systems like that. Sometimes what we’re building needs complicated conditional logic, and it needs careful design and organization to make it work within the tight limits of the resources we have available. Katherine Ford and Will Thompson reported the other day on a project that gave an inspiring example of making effective use of the XSLT support that’s present in standard browsers today [Ford and Thompson 2018]. John Chelsom this morning told us how XForms can be used in building complicated systems that can out-perform, even without reference to price, much more expensive systems that are not based on descriptive markup [Chelsom and Chelsom 2018]. Steven Pemberton talked about how XForms 2.0 is going to make that kind of system even easier and even better, providing a stronger foundation for the systems we build [Pemberton 2018].

When we build systems in large organizations, we are necessarily going to have to talk to domain specialists and other people who do not think in angle brackets. Betty Harvey showed us a down-to-earth, effective technique that we can use to meet those users on their own ground [Harvey 2018].

When we build systems, for many of us XSLT is the core development technology; Vasu Chakkera showed us a very cool approach to improving our ability to maintain an XSLT infrastructure [Chakkera 2018]. And maintaining an infrastructure is very important as we learned from some of the talks about lifecycles of projects.

But our systems don’t include just programs and documents. The system includes the human beings who create and maintain the documents and use the system. And those human beings are going to need style guides, as Ari Nordström so persuasively argued [Nordström 2018]. Any system at some level includes those who build it, and we need to work on our own skills, so the kind of mental training that Abel Braaksma talked about this morning [Braaksma 2018], or the kind of Zen meditation on different ways of doing the same thing that Elisa Beshero-Bondar, David Birnbaum, and I talked about the other day [Birnbaum, Beshero-Bondar, and Sperberg-McQueen 2018] become relevant.

If you are lucky and build a system that is successful and gets used in a real situation, then it will follow the lifecycle of any successful system, and that means that after it has been used long enough it will eventually reach the end of its useful life. It was clear in Peter Lukehart’s account of the project related to the documents of the Accademia di San Luca that sometimes external factors play a huge role here [Lukehart 2018], a point which was also underscored by the first-person accounts from Jim Mason and Bob Yencha that same afternoon. [Mason and Yencha 2018] But one of the things it reminded us of is that our systems have to be maintainable if we want them to last a long time. And if we want them to last a long time, they need to be maintainable by people other than us, and they have to fit into larger ecosystems.

That means that if we want to build systems that will last a long time and that will outlast not just the software they were built with, but the people who built them, then we need to build our community. We need to make sure that there are other people with relevant skills who can take over the maintenance of those projects.

Niche technologies can have a role, but I think evolutionary biology tells us that if you want to survive in a niche, you have to defend that niche. You might want to expand beyond it, but you will survive in a niche only if the niche stays around.

Another salient property of communities is that by and large they have boundaries. I think there are a lot of cultures and languages that do have the concept of the community of all humanity or the whole world as a community, but it’s a rather unusual community. Most communities have boundaries. They have a definition, and one of the essential properties of definitions is that there is an inside and an outside. There is in and out. There is us and them.

A distinction between us and them is not in itself alarming or harmful. It’s part of knowing who we are. One of the important intellectual achievements of a healthy infant is learning the difference between ourselves and our mothers, and later learning the difference between ourselves and the inanimate world around us. But, it is, of course, possible for a distinction between us and them to be exaggerated and turned into demonization. I hardly need to point out just how easy it is to find examples of that in recent developments in many of the countries of the world. And not just recent developments: it’s not as if demonization of outsiders were a new invention.

But even when there is no demonization involved, different communities can often feel as if they are in competition with one another. And competition can easily turn into some mutual suspicion, maybe some hostility, at least some friction. Before XML ever existed, there was a very clear distinction between people who were interested in the kinds of data that were typically managed with word processing systems and people who were interested in the kind of information that was typically managed with database management systems.

Murata Makoto used to say, There’s really only one interesting innovation in XML, and that is having a single notation that will work for both of those kinds of data so that we can communicate more easily between those groups. Now, personally, I was surprised when the database management people showed up in the XML Working Groups. And at dinner one night, I took the opportunity of sitting with a table full of SQL people, and I said, Don’t get me wrong. I’m very happy you’re here. But why are you here? What does XML offer you? They said, Well, exchange between databases. And I said, Comma-separated values don’t do it? And they said, Phtwww! [Much laughter.] No, comma-separated values were, at least as they told it, one of the single most expensive topics in consumer support calls, because no one writes comma-separated values parsers the same way twice. I don't mean just that no two different people write the same escaping rules and so forth, but that if the same programmer writes it today and then writes it again in two years – because, let’s face it, it’s going to be easier to write it again than find the code from last time – the programmer is not going to do it the same way. To my great surprise, the single thing I remember them mentioning as more important than anything else was that XML has a coherent character set story, while comma-separated values don’t. Okay, fine. I spent enough time struggling with character sets that I understood that.
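That support-call complaint is easy to reproduce. Here is a minimal Python illustration (the field values are invented) of why ad-hoc CSV parsers disagree: the naive split-on-comma parser that nearly everyone writes first parts company with a parser that actually implements the quoting conventions as soon as a field contains a comma.

```python
import csv
import io

line = 'id,"Smith, John",42\n'

# The parser everyone writes first: split on commas.
naive = line.strip().split(",")

# A parser that implements the usual CSV quoting conventions.
proper = next(csv.reader(io.StringIO(line)))

print(naive)   # ['id', '"Smith', ' John"', '42'] -- quoted comma mishandled
print(proper)  # ['id', 'Smith, John', '42']
```

And this shows only the quoting problem. The character-set problem the SQL people complained about is worse: a CSV file carries no declaration of the encoding of its bytes, while an XML document does.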

Personally, I think the interest of the database management vendors in XML has been to our long-term benefit (not that they helped us with character set issues), but it’s true that the relationships were occasionally kind of fraught. I have sat in meetings where after a joint session with another Working Group, members of an individual Working Group retired to a separate room and besieged the Chair with the question Do we have to meet with them again? I am sick and tired … If he says that one more time … [Much laughter.] And there is a sense – this is, I guess, another example of how non-inevitable some things are – there is a sense in which the only reason XPath 2.0 managed to be the common language of XQuery and XSLT 2.0 was the strength of will of two Working Group Chairs who said, Yes, we have to meet with them again. Go back in there and listen.

Their motivations were not necessarily purely idealistic. There’s a perfectly good commercial explanation. Our customers are not going to thank us if we have two specs that are very close to each other but have conflicting stories about their string functions. Customers don’t care how irritating he was. They will care forever if one spec counts from zero and the other spec counts from one, or any of the other trivial, but crucial things that occupy Working Group discussions.

And similarly, I think, our relationship with browsers and those who make the browsers and those who are interested in and develop browser-oriented technologies like JavaScript can be productive even if our relationship is not currently uniformly cordial. So I was happy that Steve DeRose talked about a way to make CSS more comfortable for us, to make CSS selectors a little more powerful [DeRose 2018]. I was grateful to Hugh Cayless and Raffaele Viglianti for showing us a tool that means – I’m so grateful to them for this – that I really don’t have to learn about HTML5 custom elements because they’ve taken the hit for me! I can just use their code [Cayless and Viglianti 2018]! [Much laughter.] Wendell Piez talked to us yesterday about the necessity of being able to ascend and descend the ladder of increasing and decreasing expressive power and the importance of not being proprietary about one particular piece of turf on that scale [Piez 2018]. When Steven Pemberton talked about invisible XML, he talked about a way to live more productively with the people on the other side of the border [Pemberton 2018].

Of course, our core concern as a community centered around descriptive markup is descriptive markup. And not just markup because, let’s face it, after a period of collaboration and coopetition, the data heads and the document heads are mostly going separate ways now. So, the use of a common technology is not in itself enough to seed a strong sense of community. We’re document people, many of us in this room, and so we care, in ways that some database people whom I can think of (but will not name) don’t seem to, about making documents look good. So, we care when Tony Graham shows us how to detect problems in layout that come when you try to do lights-out formatting and how to detect them automatically and fix them automatically using XML tools [Graham 2018].

We care when Pradeep Jain and Joe Gollner show us a way to make better markup vocabularies [Jain and Gollner 2018]. And we want to stand up and cheer when Robert Beezer tells us that great success story for generic application-independent descriptive markup: reuse, single-source, multi-target publication through application independence! [Beezer 2018]. Yes, that’s what got me into this business in the first place! Thank you for that!

If we’re serious about being interested in descriptive markup and not just in XML, then we need to pursue to the ends of the earth if necessary the idea of data representations that help us tell the truth about our documents – the truth as we see it and understand it. We need to seek out opportunities to turn our understanding of markup inside-out. Ronald Haentjens Dekker and his colleagues gave us a wonderful opportunity to do just that in their introduction to the model of TAGML [Haentjens Dekker et al. 2018]. Elisa Beshero-Bondar and Raffaele Viglianti showed us a way to turn our understanding of TEI documents inside-out: A new way to think about the problem of integrating data from multiple sources, not by bulk-ingesting that data but by building bridges to it in its existing location [Beshero-Bondar and Viglianti 2018]. The kind of hyperlink-only document that they showed reminds me of nothing so much as the theory of hypertext as it was being enunciated in the 1970s and 1980s by Ted Nelson and other people. It’s nice to think that maybe HTML didn’t kill hypertext for good. [Much laughter.]

If we want to have our data representation accurately reflect the nature of the object that we’re representing, then we have to think hard about the nature of that object and how we describe it. Jacob Jett and David Dubin showed us just how complicated that can be, even with phenomena that we think are reasonably well-attested and well-understood [Jett and Dubin 2018]. People have been doing editions of works for quite a long time, but by golly, there is a lot of complexity inside that box when you open it up.

We need self-reflection. We need to reflect on more than just technical matters and technique. Mary Holstege offered us a wonderful tour of ways to guide ourselves to think about things in different ways by consciously changing our metaphors and by consciously taking them a little too seriously to see what more we can learn from them [Holstege 2018]. Allen Renear not only reviewed for us the differences between rules-based ethics and consequentialist ethics as they might apply to questions of markup, but he concluded, in a move that will have warmed many hearts in the room, that the intense intellectual contemplation of problems, including the nature of text and the problem of how to express that nature in markup, falls into the category of theoria, which is itself a residual benefit of our activity and provides a sufficient justification of markup for all time [Renear 2018].

Ethics is not just a question of contemplation. Tammy Bilitzky – although she talked about a lot of technical issues, and there was a lot of technical meat there too – illustrated just how practical questions of ethics can be. When you write a web crawler, you had better behave ethically and considerately towards the webmasters whose sites you’re crawling, or you will be repelled as an attacker [Gross et al. 2018]. So, ethics can have consequentialist justifications in very short-term measurements as well.

It often turns out that people who share technical challenges and share approaches to those technical challenges have chosen those approaches for reasons that, for want of a better word, I’ll call their ideals. It’s possible to be interested in descriptive markup for purely utilitarian, monetary reasons. But there is a striking correlation between activity in this area and the belief that markup can and does make the world a better place, both internally, through the intellectual contemplation that Allen Renear was talking about, and externally, because the use of markup that’s oriented to the domain and not to one particular application or processing scenario can make the information feel richer and more accurate and therefore suggest better ways to exploit our investment in it and allow more reuse. The use of application-independent generic markup can make it easier – can make it easier; it doesn’t guarantee anything – for individuals and organizations to own and control their own data (whatever we turn out to mean by owning data), or at least to control their own data as opposed to ceding control to software vendors or to service providers like Huge Nameless Printing Corporation.

The use of declarative semantics can make it easier to write complicated applications and make development both easier and thus more democratic in some ways … and cheaper. That tendency toward the redistribution of power, and the increase in autonomy for as many individuals and organizations as are willing to take it, is, I think, part of what leads Steven Pemberton to believe that declarative markup and declarative programming can help lead the web to its full potential [Pemberton 2018]. And the web, in the meantime, has become a really huge part of everyone’s world, so even if we don’t think of ourselves as living in the web all the time, we live in the web a lot of the time. It’s worthwhile to think hard about how to lead the web to its full potential. Those properties of generic and descriptive markup are also, I think, perhaps part of what motivates Bethan Tovey and Norman Walsh and those working with them – others involved in the Markup Declaration – to believe that the cultivation of descriptive markup has value to society that is worth trying to preserve and propagate [Walsh and Tovey 2018].

If …

Now, when communities form around shared ideals or shared interests, it’s not in itself necessary for everyone involved to be doing the same work. And when communities form around shared work, it’s not in itself necessary for everyone to share the same ideals or the same intellectual interests or the same commercial interests. But if those of us who depend commercially on descriptive markup want to continue making our livings by doing this kind of work, then we need to cultivate awareness among potential customers that this kind of work is worth doing and worth paying for. We need to cultivate our tools for building systems and our skills at deploying those tools for the benefit of our clients so that we can provide better service and gain more clients than our competition.

And if those of us who find intellectual satisfaction in the work want to continue devising and discussing better ways to support the deployment and exploitation of descriptive markup – better ways to support the useful representation of the texts we study – then we need places to report on our work and exchange notes with each other. And if those of us who believe that descriptive markup can help make at least parts of the world a better place want to use descriptive markup to help lead the web or information technology more generally or society more generally to its full potential, then we have plenty of challenges to meet. We will need places to cultivate our sense of community. I’ll tell you what: let’s meet back here in a year and exchange notes. We have work to do. Let’s go and do it!

References

[Beezer 2018] Beezer, Robert A. PreTeXt: An XML vocabulary for scholarly documents. Presented at Balisage: The Markup Conference 2018, Washington, DC, July 31 - August 3, 2018. In Proceedings of Balisage: The Markup Conference 2018. Balisage Series on Markup Technologies, vol. 21 (2018). doi:https://doi.org/10.4242/BalisageVol21.Beezer01.

[Beshero-Bondar and Viglianti 2018] Beshero-Bondar, Elisa E., and Raffaele Viglianti. Stand-off Bridges in the Frankenstein Variorum Project: Interchange and Interoperability within TEI Markup Ecosystems. Presented at Balisage: The Markup Conference 2018, Washington, DC, July 31 - August 3, 2018. In Proceedings of Balisage: The Markup Conference 2018. Balisage Series on Markup Technologies, vol. 21 (2018). doi:https://doi.org/10.4242/BalisageVol21.Beshero-Bondar01.

[Birnbaum, Beshero-Bondar, and Sperberg-McQueen 2018] Birnbaum, David J., Elisa E. Beshero-Bondar and C. M. Sperberg-McQueen. Flattening and unflattening XML markup: a Zen garden of XSLT and other tools. Presented at Balisage: The Markup Conference 2018, Washington, DC, July 31 - August 3, 2018. In Proceedings of Balisage: The Markup Conference 2018. Balisage Series on Markup Technologies, vol. 21 (2018). doi:https://doi.org/10.4242/BalisageVol21.Birnbaum01.

[Braaksma 2018] Braaksma, Abel. Easing the road to declarative programming in XSLT for imperative programmers. Presented at Balisage: The Markup Conference 2018, Washington, DC, July 31 - August 3, 2018. In Proceedings of Balisage: The Markup Conference 2018. Balisage Series on Markup Technologies, vol. 21 (2018). doi:https://doi.org/10.4242/BalisageVol21.Braaksma01.

[Cayless and Viglianti 2018] Cayless, Hugh, and Raffaele Viglianti. CETEIcean: TEI in the Browser. Presented at Balisage: The Markup Conference 2018, Washington, DC, July 31 - August 3, 2018. In Proceedings of Balisage: The Markup Conference 2018. Balisage Series on Markup Technologies, vol. 21 (2018). doi:https://doi.org/10.4242/BalisageVol21.Cayless01.

[Chakkera 2018] Chakkera, Vasu. Documentation of XSLTs with Code Intelligence. Presented at Balisage: The Markup Conference 2018, Washington, DC, July 31 - August 3, 2018. In Proceedings of Balisage: The Markup Conference 2018. Balisage Series on Markup Technologies, vol. 21 (2018). doi:https://doi.org/10.4242/BalisageVol21.Chakkera01.

[Chelsom and Chelsom 2018] Chelsom, John J., and Jay H. Chelsom. Scaling XML Using a Beowulf Cluster. Presented at Balisage: The Markup Conference 2018, Washington, DC, July 31 - August 3, 2018. In Proceedings of Balisage: The Markup Conference 2018. Balisage Series on Markup Technologies, vol. 21 (2018). doi:https://doi.org/10.4242/BalisageVol21.Chelsom01.

[DeRose 2018] DeRose, Steven J. Dynamic Style: Implementing Hypertext through Embedding Javascript in CSS. Presented at Balisage: The Markup Conference 2018, Washington, DC, July 31 - August 3, 2018. In Proceedings of Balisage: The Markup Conference 2018. Balisage Series on Markup Technologies, vol. 21 (2018). doi:https://doi.org/10.4242/BalisageVol21.DeRose01.

[Ford and Thompson 2018] Ford, Katherine, and Will Thompson. An Adventure with Client-Side XSLT to an Architecture for Building Bridges with Javascript. Presented at Balisage: The Markup Conference 2018, Washington, DC, July 31 - August 3, 2018. In Proceedings of Balisage: The Markup Conference 2018. Balisage Series on Markup Technologies, vol. 21 (2018). doi:https://doi.org/10.4242/BalisageVol21.Thompson01.

[Graham 2018] Graham, Tony. Copy-fitting for Fun and Profit. Presented at Balisage: The Markup Conference 2018, Washington, DC, July 31 - August 3, 2018. In Proceedings of Balisage: The Markup Conference 2018. Balisage Series on Markup Technologies, vol. 21 (2018). doi:https://doi.org/10.4242/BalisageVol21.Graham01.

[Gross et al. 2018] Gross, Mark, Tammy Bilitzky, Rich Dominelli and Allan Lieberman. White Hat Web Crawling: Industrial-Strength Web Crawling for Serious Content Acquisition. Presented at Balisage: The Markup Conference 2018, Washington, DC, July 31 - August 3, 2018. In Proceedings of Balisage: The Markup Conference 2018. Balisage Series on Markup Technologies, vol. 21 (2018). doi:https://doi.org/10.4242/BalisageVol21.Gross01.

[Gutentag 2006] Gutentag, Eduardo. Intellectual property policy for the XML geek. Presented at Extreme Markup Languages® 2006, Montreal. In Proceedings of Extreme Markup Languages® 2006. http://conferences.idealliance.org/extreme/html/2006/Gutentag01/EML2006Gutentag01.html

[Haentjens Dekker et al. 2018] Haentjens Dekker, Ronald, Elli Bleeker, Bram Buitendijk, Astrid Kulsdom and David J. Birnbaum. TAGML: A markup language of many dimensions. Presented at Balisage: The Markup Conference 2018, Washington, DC, July 31 - August 3, 2018. In Proceedings of Balisage: The Markup Conference 2018. Balisage Series on Markup Technologies, vol. 21 (2018). doi:https://doi.org/10.4242/BalisageVol21.HaentjensDekker01.

[Harvey 2018] Harvey, Betty. Using Excel Spreadsheets to Communicate XML Analysis. Presented at Balisage: The Markup Conference 2018, Washington, DC, July 31 - August 3, 2018. In Proceedings of Balisage: The Markup Conference 2018. Balisage Series on Markup Technologies, vol. 21 (2018). doi:https://doi.org/10.4242/BalisageVol21.Harvey01.

[Holstege 2018] Holstege, Mary. Metaphors We Code By: Taking Things A Little Too Seriously. Presented at Balisage: The Markup Conference 2018, Washington, DC, July 31 - August 3, 2018. In Proceedings of Balisage: The Markup Conference 2018. Balisage Series on Markup Technologies, vol. 21 (2018). doi:https://doi.org/10.4242/BalisageVol21.Holstege01.

[Jain and Gollner 2018] Jain, Pradeep, and Joe Gollner. A lite DITA+ model for technical manuals. Presented at Balisage: The Markup Conference 2018, Washington, DC, July 31 - August 3, 2018. In Proceedings of Balisage: The Markup Conference 2018. Balisage Series on Markup Technologies, vol. 21 (2018). doi:https://doi.org/10.4242/BalisageVol21.Jain01.

[Jett and Dubin 2018] Jett, Jacob, and David Dubin. How are dependent works realized? Presented at Balisage: The Markup Conference 2018, Washington, DC, July 31 - August 3, 2018. In Proceedings of Balisage: The Markup Conference 2018. Balisage Series on Markup Technologies, vol. 21 (2018). doi:https://doi.org/10.4242/BalisageVol21.Dubin01.

[Lukehart 2018] Lukehart, Peter M. The Journey of The History of the Accademia di San Luca, c. 1590-1635: Documents from the Archivio di Stato di Roma into and out of XML. Presented at Balisage: The Markup Conference 2018, Washington, DC, July 31 - August 3, 2018. In Proceedings of Balisage: The Markup Conference 2018. Balisage Series on Markup Technologies, vol. 21 (2018). doi:https://doi.org/10.4242/BalisageVol21.Lukehart01.

[Mason and Yencha 2018] Mason, James, and Bob Yencha. Panel Discussion: Why successful XML/SGML projects are reimplemented or decommissioned. Presented at Balisage: The Markup Conference 2018, Washington, DC, July 31 - August 3, 2018.

[Nordström 2018] Nordström, Ari. In Defence of Style Guides. Presented at Balisage: The Markup Conference 2018, Washington, DC, July 31 - August 3, 2018. In Proceedings of Balisage: The Markup Conference 2018. Balisage Series on Markup Technologies, vol. 21 (2018). doi:https://doi.org/10.4242/BalisageVol21.Nordstrom01.

[Pemberton 2018] Pemberton, Steven. In praise of XML. Presented at Balisage: The Markup Conference 2018, Washington, DC, July 31 - August 3, 2018. In Proceedings of Balisage: The Markup Conference 2018. Balisage Series on Markup Technologies, vol. 21 (2018). doi:https://doi.org/10.4242/BalisageVol21.Pemberton01.

[Pemberton 2018] Pemberton, Steven. XForms 2.0: What’s new. Presented at Balisage: The Markup Conference 2018, Washington, DC, July 31 - August 3, 2018. In Proceedings of Balisage: The Markup Conference 2018. Balisage Series on Markup Technologies, vol. 21 (2018). doi:https://doi.org/10.4242/BalisageVol21.Pemberton02.

[Pemberton 2018] Pemberton, Steven. On the Descriptions of Data: The Usability of Notations. Presented at Balisage: The Markup Conference 2018, Washington, DC, July 31 - August 3, 2018. In Proceedings of Balisage: The Markup Conference 2018. Balisage Series on Markup Technologies, vol. 21 (2018). doi:https://doi.org/10.4242/BalisageVol21.Pemberton03.

[Piez 2018] Piez, Wendell. Fractal information is. Presented at Balisage: The Markup Conference 2018, Washington, DC, July 31 - August 3, 2018. In Proceedings of Balisage: The Markup Conference 2018. Balisage Series on Markup Technologies, vol. 21 (2018). doi:https://doi.org/10.4242/BalisageVol21.Piez01.

[Renear 2018] Renear, Allen H. Markup ethics: Trolley problems for text encoders. Presented at Balisage: The Markup Conference 2018, Washington, DC, July 31 - August 3, 2018. In Proceedings of Balisage: The Markup Conference 2018. Balisage Series on Markup Technologies, vol. 21 (2018). doi:https://doi.org/10.4242/BalisageVol21.Renear01.

[Usdin 2018] Usdin, B. Tommie. YAMC? Why are we here? Why are we here again? Presented at Balisage: The Markup Conference 2018, Washington, DC, July 31 - August 3, 2018. In Proceedings of Balisage: The Markup Conference 2018. Balisage Series on Markup Technologies, vol. 21 (2018). doi:https://doi.org/10.4242/BalisageVol21.Usdin02.

[Walsh and Tovey 2018] Walsh, Norman, and Bethan Tovey. The Markup Declaration. Presented at Balisage: The Markup Conference 2018, Washington, DC, July 31 - August 3, 2018. In Proceedings of Balisage: The Markup Conference 2018. Balisage Series on Markup Technologies, vol. 21 (2018). doi:https://doi.org/10.4242/BalisageVol21.Tovey01.



[1] I am grateful to Tonya R. Gaylord of Mulberry Technologies for transcribing this talk and supplying the descriptions of audience reactions. I have changed a few formulations in an attempt to make the written form easier to follow and reduce some of my more distracting verbal tics, but I have not attempted any more thorough revisions.

C. M. Sperberg-McQueen

Founder and principal

Black Mesa Technologies LLC

C. M. Sperberg-McQueen is the founder and principal of Black Mesa Technologies, a consultancy specializing in helping memory institutions improve the long-term preservation of, and access to, the information for which they are responsible.

He served as editor in chief of the TEI Guidelines from 1988 to 2000, and has also served as co-editor of the World Wide Web Consortium's XML 1.0 and XML Schema 1.1 specifications.