Evaluation data, methodology and results

Overview

We set two goals for overProof:

1. To reduce the false negatives (the misses) on a corpus search caused by OCR errors by at least 50%, and to reduce the false positives (incorrectly returned results) by the same amount.

Rationale

  1. Most text corpus searching uses simple keywords (Google-style)

  2. Typical uncorrected OCR'ed text accurately represents between 65% and 90% of the different words (the vocabulary) used in an article. This means that for a typical single keyword search on uncorrected text, between 10% and 35% of articles which should match will not be returned. For a typical two-keyword search, this increases to between 19% and 57%.

  3. The goal of overProof is to halve the number of documents not found on a keyword search due to OCR errors. That is, a keyword search on corrected text should retrieve at least between 83% and 95% of relevant articles. This means that for a typical single keyword search on corrected text, only between 5% and 17% of articles which should match will not be returned. For a typical two-keyword search, this increases to between 10% and 31%.
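
The two-keyword figures follow from assuming that each keyword is matched independently, so a search misses an article whenever any one of its keywords is wrong. A minimal Python sketch of that arithmetic (our illustration, not overProof's published code) approximately reproduces the ranges quoted above:

for label, accuracies in (("uncorrected", (0.65, 0.90)), ("corrected", (0.83, 0.95))):
    for p in accuracies:
        # p = probability that a given word is represented correctly in the text
        single_miss = 1 - p        # one-keyword search misses the article
        double_miss = 1 - p * p    # two-keyword search misses the article
        print(f"{label} p={p:.2f}: 1-keyword miss {single_miss:.0%}, 2-keyword miss {double_miss:.0%}")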

An evaluation of search-recall improvement on a representative set of newspaper articles corrected by overProof is shown graphically below. Articles shown in green represent those only found after text correction. Full details here.

[Figure: recall on uncorrected vs corrected text, single- and two-keyword searches]

                   Uncorrected text    Corrected text
Single keyword     84% recall          93% recall
Two keywords       70% recall          87% recall

2. To improve the readability and reusability of OCR'ed text by removing at least 60% of OCR errors.

Rationale

The readability of OCR'ed text is very important. Many people will prefer to read clear, resizable and reflowable text rather than the original image. Corpus text will often be reused, for example, printed or copied into documents. Having good quality text that can be processed digitally increases the usefulness of the corpus.

An evaluation of word-correction improvement on a representative set of newspaper articles corrected by overProof is shown graphically below. Words shown in green represent those appearing correctly after text correction. Words shown in grey represent errors. Full details here.

[Figure: word accuracy on uncorrected vs corrected text]

                   Uncorrected text    Corrected text
Word accuracy      81%                 94%

Datasets

Evaluation requires a large amount of "raw" and "ground truth" text from content of significant interest to humans.

Compiling typical and accurate "raw" and "ground truth" text is very difficult, as explained in detail below. To be "typical", the OCR text should have been produced from sources which will be used by the target audience. Results obtained by evaluating effectiveness on synthetic material produced by, for example, printing and scanning known "ground truth" text cannot be extrapolated to real world use. Our main interest is to evaluate overProof on 19th and 20th century newspapers, often digitised from degraded copies or old microfilm, so it is that material we must use for testing.

Similarly, it is important to evaluate effectiveness on text produced using a particular OCR implementation. Owing to the widespread use of ABBYY's FineReader, we chose an evaluation based on text generated by FineReader.

We have formally evaluated overProof on 3 datasets:

  1. Dataset 1 - Sydney Morning Herald mostly corrected medium-length articles, 1842-1954. A large amount of text, but poor-quality "ground truth".
  2. Dataset 2 - Sydney Morning Herald completely corrected medium-length articles, 1842-1954. A randomly selected subset of Dataset 1 containing 159 articles manually corrected to "ground truth".
  3. Dataset 3 - Article text from the Library of Congress Chronicling America newspaper archive. A randomly selected sample from 5 U.S. newspapers manually corrected to "ground truth".

Methodology

Each dataset consists of newspaper articles. We have 3 versions of the text for each article:

  1. The text as OCR'ed
  2. The "ground truth" version of this text, produced by human correction of the OCR text
  3. The text corrected by overProof operating on the OCR text

We perform 3 measurements on each dataset:

  1. Recall improvement. Finding an article in a search engine relies on the indexing of the correct article text. This measurement calculates the reduction in search misses achieved after article text correction. It does this by comparing the number of unique "ground-truth" words found amongst the OCR text with the number found amongst the corrected text.
  2. Readability improvement. The readability of article text is determined by the accuracy of the digitised text. This measurement calculates the reduction in the number of erroneous words after article correction. Note that whereas the recall improvement measurement requires that only one of possibly several occurrences of a "ground truth" word be accurate, this measurement considers how many occurrences are correct.
  3. Weighted-recall improvement. Similar to the first measurement, but also weights the importance of words based on their entropy. So, for example, a correction of common words such as "the" and "one" is much less valuable than a correction of a rare word, which is more discriminating in selecting just a small number of articles. Exactly which words are rare varies from dataset to dataset. For example, in the SMH datasets, Sydney is a common, non-discriminating word, but a relatively rare word in the U.S. newspapers dataset. Also, despite being relatively common, Sydney may be a valuable discriminator when combined with other search terms, for example, Sydney Cricket Ground.
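
The exact entropy formula is not specified here, but an IDF-style document entropy gives the intended behaviour: rare, discriminating words earn high weights and ubiquitous words earn low ones. A minimal sketch under that assumption (the function name and the example counts are illustrative only):

import math

def document_entropy(docs_containing_word, total_docs):
    # Assumed IDF-style weight: the rarer a word is across the dataset,
    # the more a correct occurrence of it contributes to weighted recall.
    return -math.log(docs_containing_word / total_docs)

# Hypothetical counts: 'sydney' is ubiquitous in an SMH corpus, 'kiandra' is rare.
print(document_entropy(40000, 47000))   # ~0.16 - weak discriminator
print(document_entropy(90, 47000))      # ~6.26 - strong discriminator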

Testing is performed as follows:

  1. Words are extracted from each version by splitting the text on white space. Leading and trailing punctuation are discarded. Single-character words, words starting with a currency symbol, and words containing numbers or other non-alphabetic characters (other than hyphen and apostrophe) are discarded. Hyphenated words are changed to their non-hyphenated form. Words are changed to their lower-case form.
  2. The entropy of "ground truth" words is calculated for the dataset.
  3. A set of unique words appearing in each version (original OCR, ground truth and corrected) is created, recording the number of times each word appears in each version.
  4. Words from the ground truth version are checked for appearance in the original OCR and corrected version. For the recall measurement, only presence or absence is noted. For the readability/word-correction measurement, the number of occurrences of words is counted. For the weighted-recall measurement, the log of the occurrence count of the word in the article is multiplied by the word document entropy.
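
A minimal Python sketch of steps 1, 3 and 4 (recall and word accuracy only; the entropy weighting is sketched in the previous section). This is our illustration of the procedure described above, not overProof's published code, and the tokenisation details are approximate:

import re
from collections import Counter

def extract_words(text):
    # Step 1: split on white space, strip leading/trailing punctuation, discard
    # single-character words and words containing digits, currency symbols or
    # other non-alphabetic characters (hyphen and apostrophe excepted),
    # de-hyphenate, and lower-case.
    words = []
    for token in text.split():
        token = token.strip('.,;:!?"()[]').lower()
        if len(token) < 2 or not re.fullmatch(r"[a-z'-]+", token):
            continue
        words.append(token.replace("-", ""))
    return words

def recall(ground_truth_text, candidate_text):
    # Steps 3-4, recall: the fraction of unique ground-truth words that appear
    # at least once in the candidate text (raw OCR or overProof-corrected).
    truth = set(extract_words(ground_truth_text))
    found = truth & set(extract_words(candidate_text))
    return len(found) / len(truth)

def word_accuracy(ground_truth_text, candidate_text):
    # Step 4, readability: every occurrence counts, not just presence or absence.
    truth = Counter(extract_words(ground_truth_text))
    candidate = Counter(extract_words(candidate_text))
    matched = sum(min(count, candidate[word]) for word, count in truth.items())
    return matched / sum(truth.values())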

Datasets

Dataset 1 - Sydney Morning Herald mostly corrected medium length articles, 1842-1954

Background to the National Library of Australia's Trove newspaper archive

The NLA's Trove Digitised newspapers site contains over 10 million OCR'ed newspaper pages. Each page is zoned into its component articles. The public can correct the OCR, and as of December 2013, over 100M column-lines of text have been corrected. Although this seems a vast amount, only a few per cent of articles have any corrections, and a tiny percentage of articles have been fully corrected.

The NLA newspapers site search identifies articles with at least one correction, and for each article, it is possible to retrieve both the most recent version of the text (incorporating all corrections made to date) and a complete history of corrected line pairs (showing the text before and after the correction and the date of the correction). Hence, by obtaining a copy of the current version of the text and then backing out corrections in reverse-date order on that copy, it is possible to recreate the original text, as OCR'ed.
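
A minimal sketch of that reconstruction, assuming the correction history is available as (date, before, after) line triples (the data layout is an assumption for illustration; it is not the Trove API):

def reconstruct_original_ocr(current_lines, corrections):
    # Undo the most recent correction first: find the line that currently holds
    # the 'after' text and restore the 'before' text, working back to the
    # original OCR.
    lines = list(current_lines)
    for _, before, after in sorted(corrections, key=lambda c: c[0], reverse=True):
        for i, line in enumerate(lines):
            if line == after:
                lines[i] = before
                break
    return lines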

Extracting original OCR and corrected content from Trove

We scraped search results of all medium-length (between 101 and 1000 words in length) news articles from the Sydney Morning Herald by year, for each year content was available, from 1842 until 1954, and then for each article with at least one correction, we crawled the corrections and the current text. If the number of correction lines was 85% or more of the total number of lines in the article, we considered the article 'fully corrected'. This does not mean that at least 85% of the lines in these documents have been corrected, as the same line is sometimes corrected more than once. However, it does mean that humans have spent considerable time correcting the article. A subsequent test, Dataset 2, was more rigorous in selecting 'perfectly' corrected articles, but was necessarily based on a smaller sample of articles.
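
A minimal sketch of that selection test (the variable names are illustrative only); note that correction-history lines can outnumber the distinct lines actually corrected:

def looks_fully_corrected(num_correction_history_lines, num_article_lines):
    # 85% threshold used to treat an article as 'fully corrected' for Dataset 1.
    return num_correction_history_lines >= 0.85 * num_article_lines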

We chose the Sydney Morning Herald title because it is one of the longest-running titles in the NLA corpus, it is accessed more than any other title on the NLA web site, and its presentation, image-quality and content are typical of other titles in the corpus. We chose news articles (rather than family notices, detailed lists and results, or advertising) because they form the majority of all articles: 4.5M from a total of 5.8M. We chose medium-length articles because very few long articles have extensive corrections and because medium-length articles form a majority of all news articles (2.6M from a total of 4.5M).

Of the 2.6M articles examined, 169K had at least one correction, and 30,301 met the 85% correction-lines criterion and hence were added to this dataset.

Per-year files containing the line-by-line original OCR and human correction line pairs for these 30,301 articles are available here. Each file contains multiple articles. Each article starts with a header line identified by the marker *$*OVERPROOF*$*, followed by pairs of original OCR text and current version text; for example, the 1842 file starts like this:

*$*OVERPROOF*$* 12409236 year 1842 type Article title The Sydney Morn
lUHSr.1 r«10.vl TUS IMXHRÏOR.||@@||NEWS FROM THE INTERIOR.
(From our various Correspondents.)||@@||(From our various Correspondents.)
NEWCASTLE.||@@||NEWCASTLE.
A vnnY melancholy accident, occa ioncd by||@@||A very melancholy accident, occasioned by
the incautious use of fim-arini, ocuiirrcd here||@@||the incautious use of fire-arms, occurred here
on Saturday last, to one ot the privates of the||@@||on Saturday last, to one of the privates of the

The NLA newspaper article id follows the header line marker. In this case, it is 12409236, and can be used to view the article on the NLA web site, retrieve just the article text, and retrieve the full line-by-line correction details.

Article line pairs consist of the original OCR text and the current version at the time of the crawl (which may or may not have been corrected), separated by the six characters: ||@@||
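
A minimal Python sketch of reading one of these per-year files, inferred from the format description above (illustrative only, not an official parser):

def parse_overproof_file(path):
    # Returns a list of (article_id, line_pairs); each pair is
    # (original_ocr_text, current_text), split on the '||@@||' separator.
    articles = []
    article_id, pairs = None, []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.rstrip("\n")
            if line.startswith("*$*OVERPROOF*$*"):
                if article_id is not None:
                    articles.append((article_id, pairs))
                article_id, pairs = line.split()[1], []
            elif "||@@||" in line:
                original, current = line.split("||@@||", 1)
                pairs.append((original, current))
    if article_id is not None:
        articles.append((article_id, pairs))
    return articles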

Limitations of treating extensively corrected articles as "ground truth"

It soon became apparent that it is not possible to treat even extensively corrected versions of articles as "ground truth". Amongst the problems:

  1. Not all lines with errors have been corrected. Even with the 85% criterion, correctors often leave a heading or final paragraph uncorrected, or simply miss 'obvious' errors in a line and don't correct the line.
  2. Human corrected lines inevitably contain uncorrected errors. For example, Thc, and in general, e erroneously OCR'ed as c are frequently not spotted by humans.
  3. Line boundaries are changed. Some correctors occasionally move text across line boundaries for no apparent reason. Sometimes this is minor, such as combining both parts of a hyphenated word on its starting line and removing the hyphen. Other times, it is less predictable, for example, see the extensive realignment performed on this article on 7 Oct 2010.
  4. Words are changed. Quite commonly, names formed as, for example, M'Donald are corrected as McDonald. Occasionally, newspaper typos are corrected, and words are deliberately changed rather than corrected.
  5. Content is added. Quite commonly, the OCR and zoning process misses text at the start or end of line, entire lines, or a paragraph, typically at the end of an article. An example in this article occurs in the corrected line starting E. A. Tyler - the two following lines were not OCR'ed, but were appended on to this line by the text corrector.

Such issues create many problems when attempting to evaluate the performance of an OCR correction process, both because the human "ground truth" is not completely accurate, and because the raw OCR is entirely missing some content that appears in the "ground truth".

As a consequence, we noticed overProof consistently generating valid corrections which had not been made in the "ground truth", and we also found text in the "ground truth" which had been added by human correctors and had no corresponding raw OCR.

Because of the poor quality of the "ground truth" in Dataset 1, we abandoned attempts to produce a meaningful evaluation of correction performance. However, when we applied the evaluation described below for the other datasets to this dataset, we found a reduction in recall misses of about 54% and a reduction in the raw word error rate of about 62%.

Whilst selecting articles with a large number of corrections helped to avoid partially corrected articles, it did bias the sample towards articles with poor-quality OCR, and hence reduced how representative this sample is of the corpus. We addressed this concern with Dataset 2.

Dataset 2 - Sydney Morning Herald completely corrected medium length articles, 1842-1954

Given the significant number of human correction errors in Dataset 1, we selected a random subset of articles and examined and corrected them to a "ground truth" standard. From the 169K SMH medium-length newspaper articles with at least one correction, we selected all articles whose NLA-assigned article id ended in 3 zeros. There were 159 articles selected. We chose to sample all articles with at least one correction, rather than articles meeting the 85% correction-lines criterion, to avoid biasing the sample towards articles with many errors.

Results

Evaluating the 159 articles containing 47,375 words gave the following results:

                       Uncorrected text    Corrected text
Recall                 83.8%               94.7%
False positives        26.7%               8.7%
Raw Error Rate         18.5%               5.0%
Weighted Error Rate    16.2%               5.9%

The calculation of recall miss and false positive reduction gives the following results:

Recall misses reduction             67.1%
Recall false positive reduction     67.3%
Raw word error reduction            73.2%
Weighted word error reduction       63.6%

That is, after correction, the number of documents missed on a keyword search is reduced by 67.1%, the number of false positives is reduced by 67.3%, and the raw number of error words is reduced by 73.2%.
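
As a quick check of how these reductions follow from the table above (our arithmetic, using the rounded table values):

recall_before, recall_after = 0.838, 0.947
miss_before, miss_after = 1 - recall_before, 1 - recall_after
reduction = (miss_before - miss_after) / miss_before
print(f"recall misses reduction: {reduction:.1%}")   # ~67.3% from rounded values,
                                                     # consistent with the quoted 67.1%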

Recall of uncorrected and corrected text - SMH data

Recall (%) of corrected and uncorrected text for each of the 159 articles sampled.

The solid red line divides those articles for which recall was improved (above the line) from those for which recall was degraded (below the line).

The green dotted line represents a 50% reduction in recall misses.


Reduction in recall misses - SMH data

Reduction in recall misses (%) plotted against the recall (%) of the uncorrected text

The red dotted line represents a 50% reduction in recall misses.


Word accuracy improvement - SMH data

Word error reduction after correction (%) plotted against the word accuracy of the uncorrected text (%).

The red dotted line represents a 60% reduction in word errors.

Dataset 3 - Library of Congress Chronicling America completely corrected medium length articles, 1871-1921

Although the SMH is representative of Australian articles, it is not necessarily representative of titles from other countries in either content or print quality (the presses used by Colonial Australian newspapers were often second-hand discards from Britain). Hence we randomly selected articles from several issues and pages from 5 titles within the Library of Congress data archive. We selected issues from the randomly chosen batches nearest the winter and summer solstices, if available, otherwise from the start and end of the run. We selected 49 news-type articles from the first and/or second pages, containing approximately 18,100 words, and hand-corrected each article with reference to the PDF image of the corresponding page.

The titles and dates sampled:

Results

Evaluating the 49 articles containing 18,106 words gave the following results:

                       Uncorrected text    Corrected text
Recall                 84.0%               93.6%
False positives        23.6%               8.5%
Raw Error Rate         19.1%               5.9%
Weighted Error Rate    16.0%               7.0%

The calculation of recall miss, false positive, raw and weighted error reduction gives the following results:

Recall misses reduction             59.9%
Recall false positive reduction     64.2%
Raw word error reduction            69.0%
Weighted word error reduction       56.2%

That is, after correction, the number of documents missed on a keyword search is reduced by 59.9%, the number of false positives is reduced by 64.2%, and the raw number of error words is reduced by 69.0%.

Recall (%) of corrected and uncorrected text for each of the 49 articles sampled.

The solid red line divides those articles for which recall was improved (above the line) from those for which recall was degraded (below the line).

The green dotted line represents a 50% reduction in recall misses.


Reduction in recall misses (%) plotted against the recall (%) of the uncorrected text.

The red dotted line represents a 50% reduction in recall misses.


Word error reduction after correction (%) plotted against the word accuracy of the uncorrected text (%).

The red dotted line represents a 60% reduction in word errors.