We start from the assumption that a digital edition must, in some meaningful way, be digitally readable. It is not sufficient for a digital text to be intelligibly displayed for human readers: it must be susceptible to some form of computational analysis.
An important part of our editing process is the systematic use of automated validation and machine-assisted verification. We use automated tests to ensure that we maintain consistent editorial standards and encode the text in a manner that supports digital analysis. Automated validation tests check the syntactic correctness of index tables and XML markup, the referential integrity of the indexing and citation units in our textual editions, and conformance to our editorial standards for XML markup and character-set usage. Once a text is validated, it can be tokenized, its tokens classified, and its lexical tokens morphologically analyzed.
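To make the shape of such a validation pass concrete, the following is a minimal sketch in Python. It assumes a hypothetical edition stored as XML files whose citation units are recorded in an @n attribute and follow a book.line numbering scheme; the file layout, attribute name, and citation pattern are illustrative assumptions, not the project's actual encoding.

```python
# Sketch of automated validation: well-formedness, citation integrity,
# and a first tokenization step. Paths, attribute names, and the
# citation scheme are hypothetical.
import re
import sys
import xml.etree.ElementTree as ET

CITATION_PATTERN = re.compile(r"^\d+\.\d+$")   # e.g. "1.12" (book.line); assumed scheme


def validate_file(path: str) -> list[str]:
    """Return a list of validation errors for one XML file."""
    errors = []
    try:
        tree = ET.parse(path)                  # syntactic check: well-formed XML
    except ET.ParseError as e:
        return [f"{path}: not well-formed XML ({e})"]

    seen = set()
    for elem in tree.iter():
        cite = elem.get("n")
        if cite is None:
            continue
        if not CITATION_PATTERN.match(cite):   # editorial standard for citation values
            errors.append(f"{path}: malformed citation '{cite}'")
        if cite in seen:                       # referential integrity: citations must be unique
            errors.append(f"{path}: duplicate citation '{cite}'")
        seen.add(cite)
    return errors


def tokenize(text: str) -> list[str]:
    """Split validated text into word tokens; classification and
    morphological analysis would operate on this token stream."""
    return re.findall(r"\w+", text, flags=re.UNICODE)


if __name__ == "__main__":
    problems = [msg for path in sys.argv[1:] for msg in validate_file(path)]
    print("\n".join(problems) or "all files passed validation")
```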
In verification, teams review visualizations that allow us to proofread aspects of our work that cannot be automatically validated. The main uses of verification to date have been to check the completeness of our edition's coverage, and to check the correctness of the indexing that links passages of text to images.
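The completeness check, in particular, lends itself to a simple machine-generated report that reviewers can scan. The sketch below assumes a hypothetical list of expected citation units (for example, every line of a book) and the set of units actually present in the edition; both inputs are placeholders for illustration.

```python
# Sketch of a coverage report for human verification: which expected
# citation units are missing from the edition? Inputs are hypothetical.
def coverage_report(expected: list[str], present: set[str]) -> str:
    """Summarize missing citation units so reviewers can see gaps at a glance."""
    missing = [cite for cite in expected if cite not in present]
    covered = len(expected) - len(missing)
    lines = [f"coverage: {covered}/{len(expected)} units"]
    lines += [f"  missing: {cite}" for cite in missing]
    return "\n".join(lines)


if __name__ == "__main__":
    expected = [f"1.{n}" for n in range(1, 11)]        # e.g. lines 1.1 through 1.10
    present = {"1.1", "1.2", "1.3", "1.5", "1.6", "1.9", "1.10"}
    print(coverage_report(expected, present))
```

A report of this kind does not replace human review; it focuses the reviewers' attention on the passages most likely to need it.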