Making your own rules for use with Ubiqu+Ity

Several years ago, Michael Witmore and Jonathan Hope published a paper in Shakespeare Quarterly that describes how the string-matching rhetorical analysis software DocuScope is able to identify stylistic fingerprints of genre in Shakespeare’s plays. Visualizing English Print is proud to make the string-matching rules used by DocuScope available online for general use as part of the multivariate textual analysis package Ubiqu+Ity.

[Image: the Ubiqu+Ity landing page]

The DocuScope dictionaries, which were initially designed to analyze rhetorical features such as persuasiveness or first-person reporting, cover 40 million linguistic patterns of English classified into over 100 categories of rhetorical effects (see http://www.cmu.edu/dietrich/english/research/docuscope.html for more information). Figure 4, taken from Ishizaki and Kaufer (2011), illustrates their process:

[Figure: “Building the DocuScope Dictionaries”, from Ishizaki and Kaufer (2011)]

According to David Kaufer, the creator of the DocuScope dictionaries, words or phrases which share an ‘aboutness’ can be grouped together in a hierarchical model of what he describes as Language Action Types (LATs); when someone runs his DocuScope dictionary on any given corpus, the software searches for exact matches based on the classifications he has made and reports statistical frequencies for each category. While the DocuScope dictionaries are quite specific (in many ways they represent their creators’ view of how language functions), any corpus sent through the dictionary will be analysed in the same way. It doesn’t matter if you send all of Charles Dickens’ novels or emails from your mother or all of Shakespeare’s plays through the DocuScope classification schema; the dictionary will check for the exact same features every time. (The joy of DocuScope, and of any string-matching software like it, is that every text uses these terms in a slightly different distributional pattern.)

In other words, Ubiqu+Ity matches text to entries in the dictionaries, then computes the percentage of words per document that falls into each LAT category. Essentially, Ubiqu+Ity parses your text and tells you which categories the language falls under according to the rules outlined in the DocuScope dictionaries. With Ubiqu+Ity, we offer several versions of Kaufer’s dictionaries as well as the ability to create your own rules. What if, for example, you were interested in the language of gender? While the DocuScope dictionaries cover a huge range of rhetorical and linguistic features, they do not have a category explicitly devoted to gender, though terminology related to gender can appear in a variety of existing LATs.

[Figure: instructions for specifying your own rules]

As these instructions suggest, we would need to create our new dictionary as a Comma-Separated Values (CSV) sheet in Excel. To the uninitiated, a Comma-Separated Values file is a spreadsheet, but in a specific format: where Excel files end with the suffix “.xlsx” (akin to “.docx”, the Word equivalent), CSV files end with the suffix “.csv”. A CSV looks like any other spreadsheet in Excel, but it is a non-proprietary format, which means your data will move comfortably across any software program and retain its structure. The example provided above is a tiny bit deceptive, though: you don’t type the commas yourself. When you save a spreadsheet as a CSV file, Excel inserts the commas as column delimiters for you, so the saved file ends up looking like the example above. If you include any special characters (spaces, punctuation, etc.) in your rules, Ubiqu+Ity will search for that exact match. The table below shows two ways of formatting a set of rules:

GOOD RULE FORMATTING

LESS GOOD RULE FORMATTING

[Screenshots: the two rule spreadsheets as they appear in Excel]

The one on the left is considered good rule formatting, because the computer will recognize it as

he, masculine
his, masculine
him, masculine
man, masculine
boy, masculine
she, feminine
her, feminine
hers, feminine
woman, feminine
girl, feminine

And the one on the right is considered less good, because to the computer this will read

he,, masculine
his,, masculine
him,, masculine
man,, masculine
boy,, masculine
she,, feminine
her,, feminine
hers,, feminine
woman,, feminine
girl,, feminine

(The one on the right may not necessarily be bad formatting outright, depending on what you’re interested in counting, but it is definitely less good than the one on the left if you just want to count words and not words with punctuation!)

These lists can be as long or as short as you want, and they can be as specific or vague as you want; whatever you tell Ubiqu+Ity to find, it will find. Once you upload your own dictionary, Ubiqu+Ity will use it to analyse your corpus, which is where the real fun starts. Here’s an example I ran using the VEP plain-text version of the Folger Digital Texts Shakespeare corpus (download it from here). You can download a CSV file reporting the statistics of your user-defined rules, which will look like this:

[Figure: the CSV report of user-defined rule statistics]

This spreadsheet reports what percentage of each text matches each user-defined rule, just as it would with the DocuScope dictionaries. I’ve used the rules described above in the good format; the more categories you define, the larger your spreadsheet will be, of course. From here, you can do the usual Excel things, like graph the results to see how the proportions of ‘masculine’ and ‘feminine’ words differ across the plays:

[Figure: chart comparing ‘masculine’ and ‘feminine’ word percentages across the plays]

Looking at this chart, I immediately want to know why Two Gentlemen of Verona has such a comparatively high volume of ‘feminine’ terms compared to other Shakespeare plays. But computers are also very good at identifying absence in ways that we humans cannot, so I am also interested in seeing why some plays, like 1 Henry 6, Love’s Labour’s Lost, and A Midsummer Night’s Dream, have a smaller proportion of ‘masculine’ language overall. Now I have specific research questions to tackle based on my initial findings.

XML Tags in TCP TEI-P4 Files

Do you work with the TEI P4 versions of TCP XML files and wonder what all those tags mean? After surveying XML tags in TCP corpora, I made a spreadsheet that lists all of the tags, defines them, and mentions where you may find said tags within the XML documents.

Download the spreadsheet from here.

Surveying and examining the files made it obvious that different TCP corpora have different levels of curation. The EEBO-TCP corpus has the most fully fleshed-out metadata. Not all tags are used across all corpora: according to my survey, Evans-TCP doesn’t use <FILEDESC> tags like EEBO-TCP and ECCO-TCP do. Also, EEBO-TCP has the following tags that ECCO-TCP and Evans-TCP don’t: <AB>, <DEL>, <FW>, and <SUBST>.

Knowing all of the tags and what they are used for has been important for VEP’s methods, since we’re writing a script that gives users flexibility in extracting text from TCP TEI P4 XML files. It uses a configuration file that indicates which text to extract and which to ignore between XML tags. For example, if you want to extract plays without their stage directions, the configuration file allows you to ignore the <STAGE> tags that contain them.

For the spreadsheet I made, I obtained tag definitions from TEI: Text Encoding Initiative.

Forcing Standardization in VARD, Part 2

The final aspect of standardization I will discuss is forcing common early modern spellings to their modern equivalents: decisions where the payoff of consistency outweighs slight data loss.

The VEP team decided to force bee > be, doe > do, and wee > we.

Naturally one can see the problems inherent to these forced standardizations.

Bee in early modern spelling can stand for the insect as well as the verb. Similarly, doe can signify a deer or a verb, and wee can be either an adjective or a pronoun. We hypothesized for our drama corpus that 1) bee would overwhelmingly be the verb; 2) doe would overwhelmingly be the verb; and 3) wee would overwhelmingly be the pronoun.

The decision to force these words was supported by sampling the frequency of meanings in the early modern drama corpus, along with frequencies from Anupam Basu’s EEBO-TCP Key Words in Context tool (set to original spelling), offered by Early Modern Print: Text Mining Early Printed English.

My method to determine meaning frequency is as follows:

  1. I searched for the first 1,000 instances of a spelling in the early modern drama corpus and in Key Words in Context.
  2. I generated CSVs of the 1,000 hits of the spelling in question, including surrounding text, to gain context and determine each word’s signification.
  3. When I located a word that deviated from the meaning VEP projected the spelling would be associated with, I highlighted the entry and took notes in a column beside the line.
  4. After I read through the 1,000 instances of the spelling, I tallied the number of times the word did not match our hypothesized meaning.

BEE > BE

CORPUS | INSTANCES OF INSECT | INSTANCES OF SPELLING | PERCENTAGE OF ERROR
EM Drama | 17 | 1,000 | 1.7%
Key Words | 71 | 1,000 | 7.1%

The rate of bee as the insect is higher in the first 1,000 hits of Key Words for generic reasons: Key Words contains all of EEBO-TCP, which includes early dictionaries (Thomas Elyot) and husbandry texts (John Fitzherbert). Moreover, compilers like George Gascoigne recognized the metaphorical power of the bee’s work (travelling from flower to flower to make sweet honey) and used it as meta-commentary on their labor of gathering the most delightful and edifying writing.

DOE > DO

CORPUS | INSTANCES OF ANIMAL | INSTANCES OF SPELLING | PERCENTAGE OF ERROR
EM Drama | 0 | 1,000 | 0%
Key Words | 1 | 1,000 | 0.1%

I looked further into variant spellings of the conjugation does in the drama corpus, to see how common the animal would be as opposed to the verb. Searching for does in the corpus yielded one instance of the animal in the first 1,000 instances of the spelling (0.1%). Searching doe’s in the corpus yielded 158 instances of the spelling, all of which were the verb.

The above results suggest minimal data loss for standardizing all instances of doe to do in the drama corpus.

WEE > WE
It is harder to pin down figures for this decision.

Searching for wee in the early modern drama corpus, I identified 4 of the first 1,000 instances that were not the pronoun. One looked like it should have been well, and another looked like an elision of God be with yee (God b’wee). The remaining two instances were French: standardized, they would be oui.

In the first 1,000 instances of wee in Key Words in Context, there was too much noise. It seems that the text you search in Key Words in Context doesn’t preserve TCP notation for illegible characters, the bullet (•). There were many places where I had to look at the original TCP files to determine the signification of wee, because neither the pronoun we nor the adjective wee made sense. When consulting the files, I matched wee to words with illegible characters (e.g., we•e).

What do these standardizations mean for the drama corpus?
If you work on bee and deer imagery in early modern drama, you will want to look somewhere other than this corpus. For the bee example, if the 17-in-1,000 rate of the spelling bee as insect holds steady over the 6,694 instances of bee in the drama corpus, then ~113 of those 6,694 spellings refer to the insect. Overall, with an error rate of 1.7%, data loss in the corpus is minimal when the spelling bee is forced to be.

Granted, I looked at only the first 1,000 instances of each spelling in the corpus and in Key Words, so I reviewed inconsistent portions of these corpora. The VEP team decided the sampling was telling for the context of the drama corpus. Another inconsistency between the files is the order in which they were searched. Key Words doesn’t provide the user with options for ordering the results, so its words are displayed in chronological order; for the drama corpus, files were searched from smallest to largest TCP file number. Overall, the frequency of significations suggests small margins of error for the standardizations of bee, doe, and wee within the corpus.

Forcing Standardization in VARD, Part 1

Optimizing VARD for the early modern drama corpus required “forcing” lexical changes to create higher levels of standardization in the dataset. Jonathan Hope gave me editorial principles to follow as we considered what words/patterns VARD should be changing but wasn’t. We wanted to standardize prepositions, expand elisions, and preserve verb endings. Unfortunately, preserving early modern verb endings (-st, -th) would require an overhaul of VARD’s dictionary.

There were three routes I followed to force standardization: manually selecting certain variants over others to change confidence scores; marking non-variants as variants and inputting their standardized forms; and adding words to the dictionary.

For the early modern drama corpus, the VEP team identified two grammatical features for forced standardization. We decided to implement consistent spelling for pronouns, adverbs, and prepositions, and to expand elisions that would interfere with algorithmic analysis, like topic modeling. Granted, more could have been changed, but we erred on the side of caution to see how effective these changes would be overall.

Below I document the forced changes; I will discuss their implications for the dataset in the next entry.

RULES TO FORCE ELISION EXPANSION (read more here)
CHARACTERS | CHANGE TO | LOCATION IN WORD
t’ | to_ | Start
th’ | the_ | Start

PRONOUNS AND CONTRACTIONS
hee > he
hir > her
ide > I’d
ile > I’ll
i’le > I’ll
shee’s > she’s
shees > she’s
* wee > we

ADVERB CONTRACTIONS
heeres > here’s
heere’s > here’s
theres > there’s
ther’s > there’s
wheres > where’s
wher’s > where’s

ADVERBS/PREPOSITIONS
aboue > above
ne’er > never
ne’re > never
nev’r > never
o’er > over
oe’r > over
ope > open
op’n > open

WORDS ADDED TO DICTIONARY: Cupid, Damon, Leander, Mathias, nunc, Paul’s, Piso, qui, quod, tis, twas, twere, twould

MARKED AS VARIANTS FOR CORRECTION: greene > green, lockes > locks, vs > us, wilde > wild

* I will discuss the implications of our decision for wee in the next entry.

Tweaking VARD: Aggressive Rules for Early Modern English Morphemes and Elisions

Since I have discussed how VARD behaves with character encoding and symbols, I will devote space to explaining how I tweaked VARD to standardize Jonathan Hope’s early modern drama corpus.

Given the size of Hope’s corpus, I needed to automate the process of comparing VARD’s output to the original play files. Erin Winter wrote a case-sensitive Python script that generated a CSV recording all of VARD’s changes and their frequencies. I compared the original words to VARD’s normalizations, looking only at the highest frequencies: unique spellings changed within the frequency range of approximately 46,000 down to 100 times, which amounted to nearly 3,000 cases. (There were approximately 58,000 unique spellings in the corpus changed 10 or fewer times.) To offer a glimpse, here are the 10 most frequent VARD normalizations for the early modern drama corpus:

ORIGINAL | NORMALIZED | FREQUENCY
haue | have | 45,680
selfe | self | 18,473
Ile | Isle | 16,095
loue | love | 15,666
thinke | think | 10,450
mee | me | 10,437
vpon | upon | 10,287
owne | own | 10,205
vp | up | 9,704
’tis | it is | 9,691

The CSV tracking normalizations proved a painless way to identify where VARD needed a gentle push in another direction. Note Ile in the above table. Yes, England is an island (of which writers were aware), but 16,095 changes to Isle seemed suspect. When I looked at files with VARD-inserted XML tags, it became obvious that those Iles should have been standardized to I’ll. There, VARD was simply wrong. (I will devote the next post to where VARD goofs, sometimes amusingly, in standardization.)

By researching questionable corrections, I was able to formulate standardization rules more “aggressive” than those the program ships with. (You can locate the default rules in the file “rules.txt,” in VARD’s “training” folder.) These rules dictate modern letter substitutions for common early modern letter combinations. Examples of the rules are as follows:

CHARACTERS | CHANGE TO | LOCATION IN WORD
vv | w | Anywhere
ie | y | Anywhere

Given the above rules, when VARD processes the word alvvaies, the program may suggest multiple variants: alwaies and alvvays. This produces competing spellings across standardized documents, which proliferate when VARD handles early modern prepositions and adverbs, and even words with apostrophes (e.g., ne’er, ne’re, and nev’r normalize differently; should the apostrophe be eliminated or maintained?).

My additions to “rules.txt” aided not only spelling standardization but also elision expansion. The rules mainly gave VARD an extra push in handling early modern English morphemes. While “rules.txt” contains a rule that ie at the end of a word can be changed to y, it didn’t have a rule to help standardize the common adverb ending lie. Here is a table of the rules I added:

CHARACTERS | CHANGE TO | LOCATION IN WORD
cyon | tion | End
lie | ly | End
shyp | ship | End
t’ | *to_ | Start
th’ | *the_ | Start
tiue | tive | End
vn | un | Start
vs | us | Anywhere
ynge | ing | End

While not comprehensive, the rules definitely aided VARD’s efforts. Of course, entering rules is only one step of the process: for the rules you add, you must manually train VARD to implement them.

* A final word regarding the entries I made to expand the elisions t’ and th’ when they begin words. I typed an underscore (_) to reflect that there is a space after to and the in the rules; VARD will recognize spaces in rule input. In the GUI the rule is displayed with an underscore, but you do not type the underscore in. The rules worked, and the program properly expanded words after some manual training: it changed th’ambassador to the ambassador and t’change to to change.

VARD & ASCII Symbols

Yes, even ASCII symbols mess up VARD.

Those who have tried to extract plain text from TCP TEI P4 or P5 XML files know how difficult it is. While coding tools to extract TCP text, the VEP team grappled with the order of operations to perform. Where is the best place in an extraction pipeline to convert the XML document to text? Where do we want to use VARD?

As discussed in my previous post, processing XML files through VARD can be tricky. Non-ASCII symbols and XML tags interrupt the words that VARD needs to check against its dictionary, preventing VARD from recognizing words in their entirety.

For the most part, VARD cannot process even ASCII symbols as part of words, which has implications for extracting and representing TCP XML files. In order to process TCP XML, the VEP team has had to construct its character cleaner and text extractor to work within VARD’s constraints regarding symbols and XML tags. Furthermore, character cleaning and text extraction had to align with editorial principles. To illustrate, the team had to consider the extent to which its algorithms modified TCP text, and the structure and contents of TCP XML files complicated the modification further. When extracting text, did we want to extract only what was definite (the characters), or also preserve the traces of illegibility (characters represented by symbols)?

In the end, VEP decided to design character cleaning and text extraction tools that preserve textual information. This required figuring out character substitutions that worked with VARD to account for symbols nested within words: if a word contained illegible characters, the number of illegible characters would be maintained. However, the bullet that TCP uses to represent illegible characters doesn’t allow VARD to read the surrounding characters as one word.

[Screenshot: the VARD GUI processing the test file]

To address the dilemma, I generated a test text file with a word that had symbols interrupting it, quite like you will find in TCP corpora. I recreated the test for this post using the word unworthinesse, to see which ASCII symbols VARD would treat as part of words. As you can see in the screen capture of VARD’s GUI, VARD successfully treats several ASCII characters as part of words (the entire word is highlighted); for the symbols not treated as part of the word, VARD doesn’t highlight them. Unsurprisingly, VARD treats hyphens (-) as part of a word, since hyphens are a common feature of compound adjectives. Other ASCII symbols VARD recognizes are the tilde (~), the caret (^), and the equals sign (=).

When designing the character cleaner for TCP corpora, the VEP team leveraged the knowledge of how VARD handles ASCII symbols in the following way:

  1. Illegible characters (bullet: •) are replaced by the caret (^). TCP: we•e | VEP: we^e
  2. Unrecognized punctuation (small black square: ▪) is replaced by the asterisk (*). TCP: long ago▪ | VEP: long ago*
  3. Unrecognized characters and common textual symbols (e.g., the pilcrow, ¶) are replaced by the at sign (@). TCP: ¶Behold, | VEP unstripped text: @Behold, | VEP stripped text: Behold,
  4. Missing words (lozenge in angle brackets: 〈◊〉) are replaced by an ellipsis in parentheses ((…)).

With the above scheme we preserve as much textual information as possible. With caret replacements, VARD has the opportunity to standardize words that have illegible characters.

Future versions of our character cleaner may take advantage of the tilde (~) to help represent letters with macrons (ā to a~).

Our character cleaner also removes certain XML tags to give the flexibility of using VARD on TCP files in text or XML format.

  1. <SEG> tags of decorative initials — <SEG REND="decorInit">T</SEG>he
  2. Superscript — 13<sup>th</sup>
  3. Subscript — X<sub>2</sub>
  4. XML comments — <!-- handkeyed by person -->

Of course, a final caution for VARDing XML files: make sure the program processes only the text that you want it to. VARD automatically ignores XML tags, but it will still alter what is between those tags, especially in the HEADER of the XML file, which contains the metadata. To make sure VARD doesn’t change the metadata, add the following entries to VARD’s “text_to_ignore.txt” file in the setup folder (the file that contains the code for ignoring XML tags):

  1. (?s)<HEADER>.*</HEADER>
  2. (?s)<header>.*</header>
  3. (?s)<teiHEADER>.*</teiHEADER>
  4. (?s)<TEIHEADER>.*</TEIHEADER>
  5. (?s)<TEMPHEAD>.*</TEMPHEAD>

Why are there so many? Because coding practices are incredibly variable. (The (?s) prefix turns on “dot matches newline” mode, so each pattern can span the entire multi-line header.)