Finding the Tipping Point in Automated Markup During Up-Translation
Up-translation can be accomplished through both automation and manual tagging. For most complex content, full automation introduces errors or misses content that requires tagging, while manual tagging is time-consuming and itself error-prone. The best results come from finding a middle ground between the two. Finding that middle ground is, however, its own challenge: it requires balancing investment in software development for automation, automatic flagging of suspect cases for manual review, and the design of a tagging and quality-assurance workflow that is both robust and efficient. This paper discusses the inevitable inconsistencies, ambiguities, and “gotcha” moments encountered when up-translating scholarly manuscripts to models such as JATS and BITS, and offers recommendations for balancing automation with manual review.
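To make the idea of automatically flagging suspect cases concrete, the following is a minimal sketch, not taken from the paper, of one possible heuristic pass over up-translated JATS-like XML. The patterns, element names, and the `flag_suspect_paragraphs` function are illustrative assumptions: the sketch flags paragraph text that looks like an untagged citation or figure/table reference so a human can review it.

```python
# Minimal sketch (illustrative, not the paper's implementation): scan
# converted JATS-like XML for text patterns suggesting missed markup,
# and queue the matches for manual review.
import re
import xml.etree.ElementTree as ET

# Heuristic patterns are assumptions for illustration, not an exhaustive rule set.
SUSPECT_PATTERNS = [
    # A parenthesized author-year string left as plain text suggests a
    # citation that was never converted to an <xref>.
    ("untagged-citation", re.compile(r"\([A-Z][A-Za-z]+,? \d{4}\)")),
    # "Fig."/"Figure"/"Table" references sitting in plain text, outside any <xref>.
    ("untagged-figure-ref", re.compile(r"\b(?:Fig\.|Figure|Table)\s+\d+")),
]

def flag_suspect_paragraphs(xml_text):
    """Return (label, paragraph id, matched text) triples for manual review."""
    root = ET.fromstring(xml_text)
    flags = []
    for p in root.iter("p"):
        # Examine only text NOT already wrapped in a child element such as <xref>:
        # the paragraph's leading text plus each child's tail text.
        untagged = (p.text or "") + "".join(child.tail or "" for child in p)
        for label, pattern in SUSPECT_PATTERNS:
            for match in pattern.finditer(untagged):
                flags.append((label, p.get("id"), match.group()))
    return flags

sample = """<body>
<p id="p1">Results agree with prior work (Smith, 2019).</p>
<p id="p2">See <xref ref-type="fig" rid="f1">Figure 1</xref> for details.</p>
<p id="p3">As shown in Table 2, error rates fall.</p>
</body>"""

for flag in flag_suspect_paragraphs(sample):
    print(flag)
```

In this sketch, `p1` and `p3` are flagged for review while `p2` is not, because its figure reference is already wrapped in an `<xref>`. A real workflow would tune such heuristics to the content set and route the flags into the quality-assurance step rather than printing them.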