A so-called "meta-pipeline" or "meta-transformation", loosely, might be any application
of a transformation technology that is anything but the simple three-part
source/transformation/result arrangement. Yet even within a simple architecture, pipelines
will typically be made of pipelines, and transformations will include multiple "logical",
even "temporal", steps or stages, within both their specification(s) and their execution.
More complex arrangements are possible and sometimes useful. These include not only
pipelines of transformations in sequence (each one consuming the results of the preceding
one) but also pipelines with extra inputs, spin-off results, or loops, wherein (for
example) logic is produced in one branch that is then used to transform the results of
another.
Because XSLT is syntactically homoiconic (canonically expressed in the same notation
that it reads and produces, i.e. XML), it is a straightforward exercise to construct a
pipeline whose transformation is itself generated dynamically. This is useful if we do
not know what XSLT we will want until runtime. If we can specify inputs to produce a
transformation programmatically, we can delay its actual production until we have those
inputs.
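A minimal sketch of what this can look like in XSLT 3.0 follows; the gen prefix, element names and the trivial generated rule are all illustrative, not prescribed. The stylesheet writes a stylesheet at runtime, using xsl:namespace-alias so that literal elements in the gen namespace come out as xsl:* elements, and then applies the result to its own source via the standard transform() function:

<xsl:stylesheet version="3.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:gen="http://www.w3.org/1999/XSL/TransformAlias">

  <!-- Literal result elements in the gen namespace become xsl:* elements
       in the generated stylesheet -->
  <xsl:namespace-alias stylesheet-prefix="gen" result-prefix="xsl"/>

  <xsl:template match="/">
    <!-- Build a (trivial) one-off stylesheet, shaped by the input itself -->
    <xsl:variable name="dynamic-xslt">
      <gen:stylesheet version="3.0">
        <gen:mode on-no-match="shallow-copy"/>
        <!-- the match pattern is computed at generation time -->
        <gen:template match="{name(*)}">
          <renamed>
            <gen:apply-templates/>
          </gen:template-content-placeholder-removed>
        </gen:template>
      </gen:stylesheet>
    </xsl:variable>
    <!-- ... and run the generated stylesheet over the same source, in one pass -->
    <xsl:sequence select="transform(map {
        'stylesheet-node': $dynamic-xslt,
        'source-node': . })?output"/>
  </xsl:template>
</xsl:stylesheet>

The generated transformation here does nothing more interesting than rename the document element; the point is only that generation and application can be chained within a single run.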
An example is the header promotion transformation as described above – a transformation
of HTML data in which paragraphs (p elements) can be mapped into h1-h6 based on properties
either assigned to them (in the data) or accessible and measurable. This is not a simple
operation, but it can be achieved using pipelines in and with XSLT.
The difficulty is that such a transformation depends on an assessment of which
properties, assigned to which paragraphs, separately and together, warrant promotion of
which (type of) paragraph. The particulars of this assessment may only be fully
discovered by inspection of the data itself. So a pipeline has to "inspect" and "assess"
the data itself before it can produce its set of rules for handling it.
Thus, in a pipeline, header promotion can proceed in three steps: in the first step, an
analysis of the data is conducted in which candidate (types of) block-level or
p elements are selected and bound to (different levels of) header elements.
In a second step, this analysis (result) is fed to a generic "meta-transformation" that
produces a one-time use XSLT specifically for the data set. The third step is the
application of this one-time custom-fit XSLT to the data, matching elements appropriately
to produce headers from the p elements as directed.
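To make this concrete, here is a hypothetical sketch; the rules vocabulary, class names and prefixes are all invented for illustration. Suppose the analysis of step one yields a small rules document:

<promotion-rules>
  <promote match="p[@class='title']"   as="h1"/>
  <promote match="p[@class='heading']" as="h2"/>
  <promote match="p[@class='subhead']" as="h3"/>
</promotion-rules>

Step two's meta-transformation then reads these rules and writes the one-time XSLT, again relying on xsl:namespace-alias:

<xsl:stylesheet version="3.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:gen="http://www.w3.org/1999/XSL/TransformAlias">

  <xsl:namespace-alias stylesheet-prefix="gen" result-prefix="xsl"/>

  <!-- Reads the analysis document; writes the one-time-use stylesheet -->
  <xsl:template match="/promotion-rules">
    <gen:stylesheet version="3.0">
      <!-- anything not promoted is copied through unchanged -->
      <gen:mode on-no-match="shallow-copy"/>
      <xsl:for-each select="promote">
        <gen:template match="{@match}">
          <!-- xsl:element runs now, at generation time, so the generated
               template carries a literal h1 (or h2, h3...) wrapping an
               apply-templates instruction -->
          <xsl:element name="{@as}">
            <gen:apply-templates/>
          </xsl:element>
        </gen:template>
      </xsl:for-each>
    </gen:stylesheet>
  </xsl:template>
</xsl:stylesheet>

Step three is then an ordinary transformation run: the generated stylesheet is applied to the HTML source, and each matching p comes out as the header it was bound to.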
As noted, HTML's lack of any kind of structural enforcement over its element set is
very advantageous here. A header promotion transformation can litter the result file with
h1-h6 elements, all without (much) concern either for formal validation or for predictable
behavior in tools.
To be sure, such raw data may not be ready to bring into a structured environment, which
will not permit such a free representation: but then, that is the point. The inference of
div or section boundaries, once headers are in place, is another fairly
straightforward operation – when the data warrants it.
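That inference too can be sketched briefly. Once headers are present, xsl:for-each-group with group-starting-with wraps each header, along with everything up to the next one, in a section. One level is shown here, assuming (illustratively) that headers sit as siblings under body; nesting further levels repeats the same move:

<xsl:stylesheet version="3.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

  <xsl:mode on-no-match="shallow-copy"/>

  <!-- Each h1, with everything up to the next h1, becomes a section -->
  <xsl:template match="body">
    <body>
      <xsl:for-each-group select="*" group-starting-with="h1">
        <xsl:choose>
          <xsl:when test="self::h1">
            <section>
              <xsl:apply-templates select="current-group()"/>
            </section>
          </xsl:when>
          <xsl:otherwise>
            <!-- leading material before the first h1 passes through -->
            <xsl:apply-templates select="current-group()"/>
          </xsl:otherwise>
        </xsl:choose>
      </xsl:for-each-group>
    </body>
  </xsl:template>
</xsl:stylesheet>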
Other similar examples of pipelines, meta-pipelines and multi-stage pipelines can be
mentioned, including pipelines:
- Producing diagnostic outputs (document maps, error reports, etc.; see the sketch below)
- Referencing external (exposed) configurations or "drivers" to simplify customization
- Enriching data sets (e.g. content type inferencing) by reference to rule sets,
  external authority files, or other criteria
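As a small illustration of the first of these, a diagnostic "document map" might be no more than an outline of the headers found, together with a count of whatever remains untyped. This is a hypothetical sketch; the document-map vocabulary is invented here:

<xsl:stylesheet version="3.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

  <xsl:output indent="yes"/>

  <!-- A quick outline of headers, plus a count of unclassified paragraphs -->
  <xsl:template match="/">
    <document-map>
      <xsl:for-each select="//h1 | //h2 | //h3 | //h4 | //h5 | //h6">
        <heading level="{name()}">
          <xsl:value-of select="normalize-space()"/>
        </heading>
      </xsl:for-each>
      <untyped-paragraphs count="{count(//p[not(@class)])}"/>
    </document-map>
  </xsl:template>
</xsl:stylesheet>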