How to cite this paper
Kimber, Eliot. “DITA Document Types: Enabling Blind Interchange Through Modular Vocabularies and Controlled
Extension.” Presented at Balisage: The Markup Conference 2011, Montréal, Canada, August 2 - 5, 2011. In Proceedings of Balisage: The Markup Conference 2011. Balisage Series on Markup Technologies, vol. 7 (2011). https://doi.org/10.4242/BalisageVol7.Kimber01.
Balisage Paper: DITA Document Types: Enabling Blind Interchange Through Modular Vocabularies and Controlled Extension
Eliot Kimber
Senior Solutions Architect
Really Strategies, Inc.
Eliot Kimber has been working with generalized markup for more than 25 years. He was
a founding member of the XML Working Group, an editor of the HyTime standard (ISO/IEC
10744:1992), and a founding and continuing voting member of the OASIS DITA Technical
Committee. For the last few years Eliot's focus has been on applying DITA to the information
representation and management challenges of professional publishers. Eliot writes
and speaks regularly on XML and related topics.
Copyright © 2011 W. Eliot Kimber
Interchange of XML documents depends in large part on the use of "compatible" vocabularies
of element types and attributes, where "compatible" means "understandable and processable
by all parties involved in the interchange". The traditional SGML and XML approach
to interchange used "interchange" document types that defined a fixed set of element
types and attributes to which all interchange parties agreed. History has demonstrated
conclusively that this approach does not work. The DITA standard, which is expressly
designed to enable blind interchange of DITA documents over the widest possible scope,
avoids this failure by turning the problem around. Rather than making the unit of
vocabulary definition the document type and then allowing unconstrained extension to
it, it makes the units of vocabulary definition invariant modules which are combined
by documents to form complete document types. Extension is allowed through two controlled
facilities: constraint modules and specialization. The constraint and specialization
facilities serve to ensure two preconditions for blind interchange: (1) All DITA documents
are inherently and reliably processable to some degree by all general-purpose DITA
processors irrespective of the markup details and (2) non-general-purpose DITA processors
can quickly determine, from document instances alone, whether or not a given document
may contain elements and attributes it does not know how to process. It is this aspect
of DITA that distinguishes it from all other XML applications and in particular from
traditional "interchange" document types based on monolithic DTDs that allow unconstrained
(and unconstrainable) extension and customization. This paper presents the details
of the DITA vocabulary and constraint module system and how that mechanism serves
to ensure smooth and reliable blind interchange of documents. It makes the argument
that the DITA vocabulary module and constraint approach, if not the DITA-specific
implementation details, could be applied to any markup application domain and thus
to any tag set.
Table of Contents
- Why Traditional Approaches to Interchange Do Not Work
- DITA's Approach: Modular Vocabulary Composition and Controlled Extension
- Document Types are Unique Sets of Modules
- Vocabulary Extension through Specialization
- Reuse and Document Type Compatibility
- DITA Self Description and Processing
- Applying the DITA Approach More Widely
SGML and XML have always been about interchange, starting with the business problem
of interchanging typesetting data sets between publishers and printers and evolving
into the promise that document data could be interchanged widely among different users
of the data because it was in XML.
The implicit, and sometimes explicit, promise, or at least vision, of SGML and, later,
XML, was blind interchange: the ability for one party of an interchange to use data provided by
the other party using only the information in the document itself along with shared
common knowledge, such as understanding of the standards to which the data conforms,
without the need for any additional knowledge to be directly communicated from the
sending party to the receiving party. In other words, the sending party can, for example,
make data available for licensing to anyone who wants to license it and licensees
can get the data knowing that they will be able to use it immediately, without the
need to adapt either the data or their processing infrastructure to accommodate the data.
This vision has gone largely unrealized because the SGML and XML community had not
devised a way to allow local definition of specific vocabulary while ensuring both
understandability and processability of data. You could either have precise markup
optimized for local requirements or you could have general-purpose markup that could
be processed by many potential consumers, but you couldn't have both. This was, and
continues to be, a problem that limits the value of XML because it adds significant
cost to the use of XML where interchange is a requirement (and interchange is always
a requirement, even if that interchange is only with your future self). An obvious
example of this cost is the complex system of validation and data normalization transforms
created and maintained by PubMed Central in order to manage the publishing of journal
articles from many publishers into the PubMed repository. This cost is entirely avoidable.
The DITA architecture provides a general way of thinking about and using modular vocabulary
that enables interchange at the lowest cost (and lowers the cost of new vocabulary
development and support of that vocabulary) in a way that no other standard for XML documents does.
DITA's general approach could be adapted to other XML vocabulary domains and thus
give them the same advantages. As discussed in the last section of this paper, while
it is technically possible to apply DITA-based markup to essentially any set of documentation
structuring requirements, there are many reasons why that is not going to be a viable
solution for many communities. But DITA's architectural ideas can, and I assert should, be applied to non-DITA-based XML vocabularies in order
to realize the same value that DITA provides in terms of reducing the cost of interchange
and generally minimizing the cost of using XML. That is, I am not suggesting that
DITA is the solution for everyone, I am only suggesting that DITA's way of thinking
about document types and vocabularies is powerful and interesting and useful. I am
asserting that DITA's approach to enabling interchange works where no other approach has.
In this paper, the term "community of interchange" means the expected set of partners
involved in the interchange of a specific set of documents or document types. Different
communities have different scopes, from small communities limited to a few authors
or products within a small software company all the way to essentially unbounded scopes,
such as all potential users of journal articles.
From a business standpoint, my day-to-day concern is with the business challenge of
very wide scope interchange as done or desired by Publishers, where the value of a
given unit of content is determined largely by the number of different consumers who
are willing to buy it or the number of different channels through which it can be
published. From this standpoint, the cost of interchange is an important determiner
of the overall value of Publishers' primary business output: information. The less it
costs others to acquire and use the content, the greater the value of that content
and the larger the potential market for it.
This paper focuses in particular on "blind" interchange. By blind interchange I mean
interchange that requires the least amount of pre-interchange negotiation and knowledge
exchange between interchange partners. In the ideal scenario, Partner A is able to
say to Partner B "here is data in format X, a format we both know and understand"
and Partner B is able to reliably and productively use the data from Partner A in non-trivial
ways without the need for any more communication with Partner A.
Why Traditional Approaches to Interchange Do Not Work
SGML introduced the formal "document type definition" (DTD), which allowed communities
of interchange to define sets of elements that would be allowed in the documents used
by that community. Because DTDs are formal, machine-processable definitions, documents
could be validated against a document type to determine conformance to the rules of
the document type. The general assumption or hope or implication was that documents
that conformed to a given DTD would then be reliably interchangeable within the community.
In this document I use the term "DTD" as shorthand for "any declarative document constraint
language or grammar", including DTDs, XSDs, and RelaxNG, unless otherwise specified.
This approach to interchange does not work and cannot work because it is too simplistic.
It ignores a number of factors that serve to impede interchange over any non-trivial scope.

No fixed set of element types can ever satisfy all the requirements of all participants,
for at least two reasons:

1. The set of requirements across all participants is always greater than the time available
for analysis and implementation. That is, at some point you have to draw a line, say
"good enough", and get something working.

2. New requirements always appear, meaning that even if (1) were not true, there would
always be times when new requirements had yet to be analyzed and implemented.
DTD validation is not sufficient to enforce all important rules to which documents
must conform. Thus there will always be processing or business processes that depend
on data rules that cannot be enforced by DTDs. This applies to any non-procedural
document constraint mechanism, including XSDs, RelaxNG schemas, and so on.
Parties to interchange seldom, if ever, author their content in exactly the form in
which it will be interchanged. Most enterprises have some need for internal-use-only
markup of some sort, such as for proprietary information that isn't shared or markup
that is specific to internal business processes or information systems. This means
that there will almost always be a difference between document types as used within
the private scope of an interchange partner and document types as used for exchange.
This implies the need for a way to express and maintain the relationship between the
internal-use markup rules and the interchange markup rules, ideally in a machine-readable
and processable form.
There is a tension between simplicity of markup design and precision of semantic identification.
XML's value derives at least in part from its ability to bind semantic labels to information
as precisely as required. However, the more element types you have (increased semantic
precision), the more complicated the document type becomes, increasing the cost of
learning, using, supporting, and maintaining it. In addition, more element types increase
the opportunity to do the same thing in different ways. Likewise, trying to maintain
simplicity by keeping element types generic can make it harder to know how to mark
up specific kinds of information, leading to inconsistent practice for the same type
of information. In the context of interchange, where an interchange DTD must be the
union of the requirements of the interchange partners, the sheer size of the resulting
document type tends to weigh heavily in favor of generality in order to avoid the
complexity cost of having a huge number of more-precise element types or the analysis
cost of determining a set of more-precise element types that reasonably satisfy the
requirements of all interchange partners.
Because DOCTYPE declarations are properties of documents, there is no way to guarantee
that a document that points to a particular external declaration set by use of a particular
public identifier or URN or absolute URL does not either add new element types and
attributes unilaterally or otherwise reconfigure the base declarations through local
parameter entities. Using a particular public or system ID for the external declaration
set does not guarantee that the actual declarations used match any other copy of the
declaration set. This means that DOCTYPE declarations, in particular, cannot be relied
upon to indicate what may or may not be encountered in a given document. XSD schemas
and other constraint mechanisms that are not part of the document itself are somewhat
better, in that a document cannot unilaterally modify an XSD or RelaxNG schema used to validate it.
XML, unlike SGML, does not require the use of DOCTYPE declarations or any other document
grammar definition or constraint definition (XSD schema, RelaxNG schema, etc.). Thus,
systems should be architected so as to accommodate documents that do not themselves
explicitly contain or point to some sort of schema. In SGML there was at least the
requirement that a document declare some sort of document type, even if doing so was
actually meaningless and unreliable. XML correctly abandoned that notion. But this means
that it must be possible for a document to carry all of the information needed for
a processor to associate the appropriate typing and constraint checking to the document.
Namespaces can partly serve this purpose but are not sufficient because a namespace declaration
tells you nothing about the vocabulary to which that namespace might be bound (and
namespaces were never intended to do so). Namespaces simply serve to make names globally
unique and, as a side effect, provide the opportunity of associating schemas with
elements by locally associating schemas with namespaces. But that namespace-to-schema
association is entirely a property of the local processing environment—it is not a
property of the document itself.
In short, the idea that you could define single DTDs for non-trivial documents that
would then enable blind interchange was naive at best and disingenuous at worst. The
SGML and XML community has proven through painful experience that monolithic "interchange"
DTDs simply do not work to enable smooth, blind interchange. Interchange can be done
but at significant cost and impedance. With transforms and documentation and countless
person hours spent in analysis and DTD design, people have been able to get documents
interchanged but not in a way that could be in any way characterized as "blind". In
general, where interchange has been made to work, it has involved some combination of the following strategies:
- Limiting the specificity of markup such that the DTD can be reasonably documented
and implemented, but the markup lacks the precision needed for many requirements, including
both important semantic distinctions and DTD-level enforcement of rules.
- Limiting the scope of the markup to limit the size and complexity of the document
type, necessarily creating gaps between the requirements of individual interchange partners
and the interchange DTD.
- Building complex and expensive-to-maintain validation applications that attempt to
check rules that cannot be enforced by the DTD (because of the generality constraint).
- Building complex and expensive-to-maintain data cleanup or data transformation systems
that attempt to normalize data from different sources into a single consistent form.
- Either disallowing all extension to ensure 100% consistency of markup across documents
(S1000D) or allowing unconstrained extension (DocBook, NLM, etc.). In the first case
either the DTD must grow to be a true union of all known requirements or some participants
will necessarily not have important requirements met, usually forcing these partners
to use a custom DTD internally in order to meet local requirements. In the second
case, the use of extended DTDs is essentially the same as the use of arbitrary custom
DTDs, as there is no way to know how any two extensions relate either to each other
or to the base DTD simply by inspection of either the DTD declarations or document instances.
All of these reactions tend to limit the utility of interchange DTDs relative to the
cost of simply mapping to and from the individual document types of the various interchange
partners. Where interchange has been made to work at all it is usually because one
of the trading partners has a significantly more powerful role relative to the other
partners, for example Boeing or Airbus relative to their suppliers or PubMed Central
relative to individual publishers. These large players can both impose constraints
and business rules and also fund and maintain the tooling needed to enable interchange.
But where the interchange community does not have such a major player, the value
of the interchange DTD is much lower. The example here would be DocBook, where you
have many relatively small users using DocBook with no single large player imposing
some particular consistency of use across the many users of DocBook. Because DocBook
reflects both a wide range of requirements and a general preference for generality over
specificity, the range of application of DocBook to the same information challenge
is quite wide. In addition, DocBook allows unilateral and unconstrained extension
from the base DocBook DTD. This means that there is no formal or automatic way to
know how any two DocBook-based documents relate to each other. (To be fair, blind
interchange was never an explicit or primary requirement of DocBook as a standard.)
While an invariant monolithic DTD that satisfies all requirements of all interchange
partners would enable this form of blind interchange, it is obvious that such a DTD
is impossible in practice. Likewise, unconstrained extension as provided by DTDs like
DocBook and NLM does not enable blind interchange because at a minimum Partner A has
to transmit knowledge to Partner B about how Partner A's DTD differs from the base
standard and, quite likely, provide processing components for processing that content.
At that point, there is no useful difference between using the standard for interchange
and using arbitrary DTDs, as the difference is only one of degree, not kind (do I
have to transmit a lot of knowledge or a little knowledge?): in both cases you have
to describe the markup details and, usually, provide supporting code or, lacking code,
impose on Partner B the cost of implementing support for the new or different markup.
Likewise, there is no guarantee that the data from Partner A will be composable with
the data from Partner B such that B could re-use data elements from A directly in
their existing documents, either by copy or by reference.
DITA's Approach: Modular Vocabulary Composition and Controlled Extension
The DITA approach flips the solution around. Rather than attempting to enable interchange
by imposing constraints top-down through a monolithic document type that attempts
to reflect the union of known requirements that can be met within the scope available,
it imposes constraints bottom-up in a way that enables local extension and configuration
while ensuring that all conforming DITA documents can be usefully and reliably processed
by all conforming general-purpose DITA processors. In the best case, interchange among
partners requires nothing beyond the agreement to use conforming DITA documents. In
the usual case, partners only need to interchange knowledge of locally-defined markup
where that markup is of interest to the receiving partner and only need to interchange
supporting processing where the locally-defined markup requires unique processing
(that is, its semantic distinction also implies or requires a processing distinction
of some sort).
DITA does this by moving the focus from document types to vocabulary "modules" assembled
or composed into distinct document types, coupled with two controlled extension mechanisms:
constraints and specialization.
DITA defines a general architecture to which all DITA documents must conform and from
which all DITA vocabulary must be derived. However, DITA does not limit the ability
to define new vocabulary or combine existing vocabulary modules together in new ways,
as long as the new vocabulary or module combination conforms to a few constraints.
A document is a conforming DITA document if the vocabulary it uses conforms to the
DITA architecture, irrespective of the specific element types used in the document.
There is no sense in which DITA defines "a" document type. Rather, it enables an infinite
set of interoperable document types based on a base architectural model. It also enables
automatic comparison of document types to determine "compatibility" and therefore
the degree to which content from one DITA document may be directly used by reference
from another DITA document.
All DITA vocabulary is based on a small set of base types that define the basic structural
and semantic rules for DITA documents. The base types are sufficiently general to
allow almost unconstrained variation in how they are adapted to specific markup requirements so
that there will always be a way to meet any reasonable markup requirement. In order
to make the mechanism work the DITA architecture must impose a few arbitrary rules
but those rules are not particularly limiting.
While the DITA standard defines a number of vocabulary modules in addition to the
base DITA types, these modules are not mandatory (no conforming DITA document is required
to use or reflect any of these modules). Likewise, the various document type definitions
defined by the DITA standard or provided by the DITA Technical Committee are not mandatory
and simply serve as examples or conveniences for DITA users. So while the DITA standard
may appear at first glance to be just another monolithic DTD standard, it is absolutely not.
Document Types are Unique Sets of Modules
In DITA a "document type" is nothing more or less than a unique set of vocabulary
and constraint modules. Each DITA document declares, through an attribute on the document's
root element (@domains), what set of modules it uses. Two documents that declare the use of the same set
of modules have the same document type. DITA does not require the use of any particular
schema technology nor does it require the use of any particular document constraint
specification or grammar. That is, DITA does not require the use of DOCTYPE declarations
or XSD schemas or any other form of formal constraint specification.
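As a sketch, a document's root element might declare its module set like this (the module names and attribute values here are illustrative, in the style of the OASIS-provided technical-content modules, and are not normative):

```xml
<!-- Root element of a DITA topic declaring its document type as a
     set of modules via @domains. Module names are illustrative. -->
<topic id="sample-topic"
       domains="(topic hi-d) (topic ut-d) (topic example-d)"
       class="- topic/topic ">
  <title class="- topic/title ">A self-describing DITA topic</title>
</topic>
```

Any other document carrying the same three module declarations has, by definition, the same document type, whatever grammar (if any) was used to validate it.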
DITA document types are composed of two types of module: vocabulary modules and constraint
modules. Both types of module have globally unique names (or at least unique within
the expected scope of interchange) and are invariant.
Modules are invariant in that they may not be modified directly, meaning that all
copies of a given module should define exactly the same set of element types or constraints.
There is no sense in which a given module may be directly extended in the way that,
say, the DocBook or NLM DTDs can be extended. This means that you don't need the actual
declarations of the module, in whatever schema language they might exist; you only
need the documentation for the module to know what rules the use of the module implies. Compare this with
DocBook, where knowing that two documents are "DocBook" documents tells you nothing
reliable about what rules those two documents reflect, because there is no way to
know, without inspecting the actual declaration sets used by the two documents, what
markup rules are actually defined.
Vocabulary modules define element types and attributes. Constraint modules define
modifications to content models and attribute lists in vocabulary modules in order
to make those content models more constrained than the base. There is no mechanism
in DITA by which content models can be made less constrained than the base model (that is, you cannot unilaterally add new element
types in places where the type—or its base type—is not already allowed).
This restriction ensures two things:
- All general DITA processors are assured that they will never encounter element types they
do not understand, nor combinations of elements that are completely unexpected.
- For any two documents of the same base type, the elements in the more-constrained
document will always be compatible with the elements in the less-constrained document.
In the case where both documents have "compatible" document types, the contents of
both documents are always compatible with each other.
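In DTD terms, a constraint module is conventionally implemented by declaring a more restrictive value for the parameter entity that holds a content model, before the base vocabulary module is included. A minimal sketch (the entity and element names here are illustrative, not taken from an actual OASIS module):

```dtd
<!-- Constraint module: require a leading <title> in <section>, where
     the base module declares section's content as (%section.cnt;)*
     with <title> merely one of the optional alternatives. -->
<!ENTITY % section.content  "(title, (%section.cnt;)*)" >

<!-- Because in DTDs the first declaration of an entity wins, and the
     constraint module is integrated ahead of the vocabulary module,
     this definition of %section.content; overrides the base one. -->
```

The resulting content model is strictly more constrained than the base, so every document valid against the constrained module is still valid against the unconstrained one.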
Because there can be no surprises about the structures that might be encountered in
a given DITA document and because vocabulary and constraint modules are invariant
it is therefore sufficient for documents to simply declare the names of the modules
they use. Literal declarations for those modules provide no direct value for processing;
they serve only to enable syntactic validation and authoring convenience.
At least in theory, a DITA processor could have built-in knowledge of all markup details
for a given set of modules and thus do validation without reference to specific DTD
or XSD declarations. Of course in practice it's easier to just implement the schema
using normal XML tools. But it is not required.
Because of the @domains attribute, along with several DITA-defined declaration attributes
(@class chief among them), all conforming DITA documents are completely self-describing with
regard to the information needed to understand and process those documents in terms
of their DITA nature and the base semantics defined in the DITA standard. Thus, DITA
documents do not need any form of document type declaration other than the @domains
attribute on the root element and @class attributes on each element.
Vocabulary Extension through Specialization
The modular vocabulary mechanism addresses one part of the interchange problem: How
to know if you can process or reuse a given document?
However, it does not by itself address the problem of how to add new vocabulary in
a way that does not break existing processing or require communication of new knowledge.
DITA addresses this second problem through the specialization facility.
The DITA specialization facility allows new element types and attributes to be formally
derived from base types, where all new elements must be ultimately specialized from
one of the fundamental base types defined by the DITA standard. Thus all DITA vocabulary
represents a type hierarchy with a common ancestry. This is roughly analogous to a
language like Java where all object classes ultimately subclass one of the fundamental
classes defined by the Java language.
The main constraint imposed by specialization is that specialized element types and
attributes must be at least as constrained as their direct ancestor type. For example,
given the base type <foo> with a content model of (a | b | c)*, an element type
<bar> that is a direct specialization of <foo> may have as its content model anything
from "EMPTY" (as all of the items are optional), to any one of the items <a>, <b>,
or <c>, to the content model (a, b, c), to, of course, the same content model as the
base: (a | b | c)*. It cannot, however, have a content model of (a | b | c | d),
because that would be less constrained, unless the element type <d> is itself a
specialization of one of <a>, <b>, or <c>. Likewise, if the content model of <foo>
required an element, <bar> would not be able to make that element optional. Finally,
the content model of <bar> could consist entirely of elements that are specializations
of any of <a>, <b>, or <c>.
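Expressed as DTD content models, the rule looks like this (the element names follow the hypothetical <foo> example, not any real module):

```dtd
<!-- Base type: -->
<!ELEMENT foo  (a | b | c)* >

<!-- Legal specializations of <foo>: each is at least as constrained. -->
<!ELEMENT bar  (a, b, c) >
<!ELEMENT baz  (c*) >
<!ELEMENT quux EMPTY >

<!-- Illegal as a specialization of <foo>: <d> is not derived from
     <a>, <b>, or <c>, so this model is less constrained than the base. -->
<!ELEMENT bad  (a | b | c | d)* >
```

Every instance of each legal specialization is, after generalization back to the base names, a valid instance of the base content model; that is the property the rule exists to guarantee.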
The fact that one element is a specialization of another is defined through the common
@class attribute, which lists the entire specialization hierarchy of the element. For example,
if the element type <foo>, defined in the module "module-1", is a specialization of
<p> from the topic vocabulary module, and the element type <bar>, defined in "module-2",
is in turn a specialization of <foo>, then the @class value of the <bar> element would be
"- topic/p module-1/foo module-2/bar ", specified on each <bar> instance like so:

<bar class="- topic/p module-1/foo module-2/bar ">
The value of the @class attribute is a sequence of module-name/element-name pairs, where
the module names correspond to the names of modules as declared in the root element's
@domains attribute. The @class attribute thus ensures that any DITA-aware processor
will be able to understand any element at least in terms of its base type (topic/p
in this case), if not in terms of its more specialized types. That means that all
DITA documents can be usefully, if not always optimally, processed by any general-purpose,
specialization-aware DITA processor.
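This fallback is what makes specialization-aware processing cheap to implement: a processor matches on the base-type token in @class rather than on element names. In XSLT, the conventional idiom is a match pattern like the following sketch (the output rendering is illustrative; note the significant leading and trailing spaces in the token):

```xml
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- Matches <p> and every element specialized from topic/p,
       however deep the specialization hierarchy. -->
  <xsl:template match="*[contains(@class, ' topic/p ')]">
    <p><xsl:apply-templates/></p>
  </xsl:template>
</xsl:stylesheet>
```

A processor that knows a more specialized token can add a higher-priority template for it; elements whose specializations it does not know simply fall through to the base-type rule.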
Because general-purpose DITA processors can usefully process any DITA document, regardless
of how specialized or constrained it is, DITA documents can be blindly interchanged
among interchange partners who have such processors, with full assurance that they
will always be able to do something with those documents, even if their processors
do not understand the specific specializations in the content.
Reuse and Document Type Compatibility
The point of interchanging DITA content is not just to be able to process it but to
do something new and interesting with the content interchanged. That is, we can presume
that you have acquired the DITA content so that you can then do something more with
it than simply generate various outputs, since you could have simply requested the
outputs themselves and saved the expense of generating them.
DITA provides two mechanisms for combining content into new publication structures:
maps and content references.
DITA's map mechanism uses documents consisting of only hyperlinks (DITA maps) to organize
maps and content objects ("topics") into arbitrary hierarchies. The only requirement
for the use of a given topic by a given map is that the topic be a valid DITA topic
document. The vocabulary and constraint details of the topics used by the map are
not relevant. (Maps can also use non-DITA resources, whether XML or non-XML,
but of course DITA can only guarantee interoperability of DITA documents.)
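A minimal map, sketched (the topic file names are hypothetical):

```xml
<!-- A DITA map organizing two topics into a simple hierarchy. -->
<map class="- map/map ">
  <title class="- topic/title ">Sample publication</title>
  <topicref href="introduction.dita" class="- map/topicref ">
    <topicref href="details.dita" class="- map/topicref "/>
  </topicref>
</map>
```

The map imposes the publication structure; the topics themselves remain freestanding and can be reused by any number of other maps.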
DITA's content reference mechanism ("conref") uses direct element-to-element links
within topics (or maps) to establish use-by-reference relationships such that the
referenced element becomes the effective value of the referencing element. It is similar
to, but more sophisticated than, XInclude and similar link-based use-by-reference mechanisms.
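A content reference, sketched (the file name and IDs are hypothetical):

```xml
<!-- The referencing element: at processing time its effective content
     becomes that of the element with id="general" inside the topic
     with id="warnings" in the document warnings.dita. -->
<p class="- topic/p "
   conref="warnings.dita#warnings/general"/>
```

Because the reference is element-to-element, the referenced content must fit the content model of the referencing context, which is what motivates the compatibility constraints described next.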
Because conref results in new effective documents (either virtual or literal depending
on how the processing is implemented), DITA imposes constraints on what may reuse
what in order to ensure, as much as possible, that the effective result is valid in
terms of the constraints governing the referencing context. In particular, a document
can only use content that is at least as constrained and at least as specialized as
itself (meaning that a more-constrained document cannot use content from a less-constrained
or less-specialized document). This rule is defined in terms of the vocabulary modules
used by both documents, not in terms of the individual content models or instance markup.
Thus, while there may be cases where less-constrained data would in fact be valid
in a particular use instance, DITA does not attempt to determine compatibility on
a per-use-instance basis. Not only would that require access to formal definitions
of all the content models involved, it would require complex and expensive processing
for only marginal gains in practice.
Rather, DITA uses the module-use declarations in the @domains attribute to compare
the modules used in two documents to determine if they are compatible for the purposes
of content reference.
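For example, given the two (illustrative) @domains declarations below, a document declaring the first set may pull content by conref from a document declaring the second, because the second document's modules are a subset of the first's and so its content can only contain markup the first document's type already allows; the reverse direction would not be permitted:

```xml
<!-- Referencing document: base topic module plus two domain modules. -->
<topic id="referencing" domains="(topic hi-d) (topic ut-d)"
       class="- topic/topic ">
  <title class="- topic/title ">Referencing document</title>
</topic>

<!-- Referenced document: a subset of those modules, so its content is
     guaranteed to be valid in the referencing document's contexts. -->
<topic id="referenced" domains="(topic hi-d)"
       class="- topic/topic ">
  <title class="- topic/title ">Referenced document</title>
</topic>
```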
DITA Self Description and Processing
In the most extreme case of blind interchange, a system is presented with a document,
with no advance preparation or configuration for the document's specific document
type, with the expectation that the system can first determine what type of document
it is and second either apply appropriate processing, or clearly report that it cannot
determine the document type, or that it recognizes the document type but the document
uses facilities or content that it does not know how to process.
For example, an XML content management system should be capable of attempting ingestion
of any XML document and, when the document is recognized as being of a
known type, automatically doing whatever is necessary to ingest and manage that document.
For this level of automation to be possible, documents and their components must be
sufficiently self descriptive so as to allow a system to determine the document's
type and, where appropriate, understand how to handle individual elements and attributes
within the document. XML documents are inherently self-describing as XML as long as
they have an XML declaration, but beyond that more is required. If we accept that requiring
the use of a DOCTYPE declaration, an XSD schema reference, or any other form of reference
to an external schema is neither acceptable nor possible to enforce in the general
case, then it follows that any self-description mechanism must use normal document markup (elements and attributes).
All DITA documents are self describing in several important ways, none of which depend
on the use of any form of document schema or grammar declaration. The self-description
mechanisms are:

- The @dita:DITAArchVersion attribute, which is required on the root element of all DITA documents. This attribute
nominally specifies the DITA version the document conforms to (1.0, 1.1, 1.2, etc.)
but really serves as an excuse to declare the DITA namespace on the root element.
This serves to unambiguously signal that the document is a DITA document.

- The @domains attribute, which specifies the set of vocabulary and constraint modules used by the document
and how they relate to each other. This attribute serves to define the document type
of the document (that is, a unique set of modules). It enables comparison of document
types for compatibility in order to enforce content reference rules, and it lets a
processor determine whether it understands all the vocabulary modules used in a
document (and therefore give clear reasons when it cannot process a document whose
modules it does not all understand).

- The @class attribute, which specifies, for each element instance or specialized attribute, its place
within the DITA type hierarchy. This allows processors that understand at least the
base DITA types to process all DITA elements, no matter how specialized, in terms
of the base DITA types.
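To make that fallback concrete, here is a small illustrative sketch of how a processor might read a @class value such as "- topic/body concept/conbody " and recover the most general type. The function names are mine, not part of the standard; the sketch assumes only the documented shape of @class values (a "-" or "+" marker followed by module/element pairs, most general first).

```python
def class_hierarchy(class_value):
    """Parse a DITA @class value such as "- topic/body concept/conbody "
    into its (module, element) ancestry, most general first."""
    tokens = class_value.split()
    # The first token is "-" (structural type) or "+" (domain specialization);
    # the remaining tokens are module/element pairs.
    return [tuple(tok.split('/')) for tok in tokens[1:]]

def base_element(class_value):
    """Return the most general element type, which any processor that
    understands the base DITA vocabulary can fall back to."""
    return class_hierarchy(class_value)[0][1]

print(class_hierarchy("- topic/body concept/conbody "))
print(base_element("- topic/body concept/conbody "))  # body
```

A processor that knows nothing about the <conbody> specialization can thus still process the element as a <body>, which is the essence of specialization-aware fallback.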
All DITA documents must have as their root element one of the element types
<dita>, <map>, or <topic>, or a specialization of <map> or <topic> (the
<dita> element is the one DITA element type that cannot be specialized).
Taken together, these three self-description mechanisms mean that general-purpose
processors can be constructed, without too much effort, that do the following when
presented with any conforming DITA document with all attributes explicit (or defaulted
via a DTD or XSD), no matter how specialized it is or what set of modules it uses:
1. Determine that the document is a DITA document by looking for the
@dita:DITAArchVersion attribute. If this attribute is found, the document must be a DITA document.

2. If there is no @dita:DITAArchVersion attribute on the root element but there is a
@class attribute whose value matches the pattern for DITA @class values and the root
element is <topic> or a specialization of <topic>, the document is almost certainly
a DITA document.

3. If the document is a DITA document, apply general DITA processing to the document,
whatever that processing might be, with no additional configuration, with the
assurance that, if the document is a valid DITA document, the processing will succeed.
Note that "valid" in this case doesn't mean "schema valid"; it means "conforms to the
requirements of the DITA standard", not all of which can be enforced by a schema even
when one is used.
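The detection step can be sketched in a few lines. This is an illustrative sniffing routine of my own devising, not a normative algorithm: the function name and the regular expression for plausible @class values are assumptions, though the namespace URI is the DITA architecture namespace and the attribute checks follow the rules described above.

```python
import re
import xml.etree.ElementTree as ET

# The DITA architecture namespace, declared on the root element of DITA documents.
DITA_NS = "http://dita.oasis-open.org/architecture/2005/"

# Illustrative pattern for DITA @class values, e.g. "- topic/topic concept/concept "
# (a "-" or "+" marker followed by module/element pairs, with a trailing space).
CLASS_PATTERN = re.compile(r'^[+-]\s+(\S+/\S+\s+)+$')

def is_dita_document(xml_text):
    """Sniff a document for DITA-ness: a DITAArchVersion attribute in the
    DITA namespace on the root element is definitive; failing that, a
    DITA-shaped @class value on the root is a strong hint."""
    root = ET.fromstring(xml_text)
    if f"{{{DITA_NS}}}DITAArchVersion" in root.attrib:
        return True
    return bool(CLASS_PATTERN.match(root.attrib.get("class", "")))

doc = ('<concept xmlns:ditaarch="http://dita.oasis-open.org/architecture/2005/" '
       'ditaarch:DITAArchVersion="1.2" class="- topic/topic concept/concept ">'
       '<title>T</title></concept>')
print(is_dita_document(doc))              # True
print(is_dita_document('<html/>'))        # False
```

Because both checks read ordinary attributes, they work on any conforming instance with explicit (or DTD/XSD-defaulted) attributes, with no schema in hand.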
With the possible exception of HTML, there is no other XML application for documentation
that supports both action (3) and the definition of arbitrary new vocabulary.
Applying the DITA Approach More Widely
The DITA modular vocabulary and specialization facilities could be applied to any
XML application, regardless of its domain. While the particular implementation
details used in DITA 1.x of
@class may not be appropriate for other XML applications, the concept of modular vocabularies
and controlled extension could be applied. In particular, the DITA 1.x
@class attribute syntax does not accommodate namespaces in any satisfying way (other than
requiring the use of specific prefixes). Any more general class-like mechanism would
need to use a syntactic approach that fully supports namespaces. Designing such a
mechanism will be a major focus of the DITA 2.0 activity (unless the DITA Technical
Committee arrives at a solution that is backward compatible with the current 1.x syntax
as part of the DITA 1.3 development activity under way now).
The main practical challenge is applying the necessary constraints to existing markup
design where the markup was likely not originally designed either as a hierarchy of
strictly-related types or to enable specialization by avoiding unnecessary constraints
in base types. Because DITA must impose some constraints on content model design,
it is usually not possible to take non-trivial document types and simply add
@class attributes to the existing element types and make them conforming DITA documents—there
will almost always be some amount of adjustment to structural models required. Even
in the case where the requirement is not to conform to existing DITA structural models
but simply to enable specialization from some set of base models reflecting the design
of the existing document type, it will likely be the case that element types that
are conceptually specializations of a newly-defined base type will not be structurally
compatible with the base types, requiring adjustment or normalization of content models.
This means that applying DITA directly or a DITA-type architecture will usually require
creating a new non-backward compatible version of the existing document type. As in
any such endeavor, the cost must be balanced against the benefit, but I assert that
the potential benefit is quite large.
DITA's out-of-the-box vocabulary, while well adapted to the specific requirements
for which DITA was originally designed (authoring and production of modular technical
documentation optimized for online and dynamic delivery), is certainly not limited
to that particular use and can be usefully applied to essentially any type of document
or documentation. Therefore, new XML standards could be DITA-based without limiting
their ability to define appropriate vocabulary. That is, one can assert that there
is no reason not to make all new documentation-related XML document types DITA-based.
However, there are well-established applications in specific subject or application
domains, such as NLM, that reflect person-decades of markup design and that are well
established within their communities of use. It would be counter-productive to suggest
or require that those communities replace their existing document types and infrastructure
with a new DITA-based application just to get the interchangeability of DITA. It makes
more sense to adapt the DITA concepts of modular vocabulary and specialization to
those existing applications, preserving the existing investment in knowledge and infrastructure
while lowering the cost of implementation and interchange and raising the value of
the content that uses those vocabularies.