I'll use a set of algorithms to split names in various ways, ideally into separate English words, as my example above would do quite nicely. Then I'll have a configurable set of standard non- or semi-English tokens that can be checked for deviation from their definitions, plus good old ignore lists.
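A minimal sketch of such a splitter (the function name and the exact boundary rules are my own assumptions, not a fixed design): it breaks an identifier on underscores, camelCase transitions, acronym boundaries, and digits, yielding lowercase word candidates for spell-checking.

```python
import re

def split_identifier(name):
    """Split an identifier into candidate words for spell-checking.

    Handles underscores, camelCase boundaries, all-caps acronyms
    (e.g. "XMLParser" -> "xml", "parser"), and digit runs.
    """
    # First break on underscores and other non-word separators.
    parts = re.split(r'[_\W]+', name)
    words = []
    for part in parts:
        # Then split each part on case/digit boundaries:
        #   [A-Z]+(?![a-z])  an acronym run not followed by lowercase
        #   [A-Z][a-z]*      a capitalized word
        #   [a-z]+           a lowercase word
        #   \d+              a digit run
        words.extend(re.findall(r'[A-Z]+(?![a-z])|[A-Z][a-z]*|[a-z]+|\d+', part))
    return [w.lower() for w in words if w]
```

For example, `split_identifier("XMLHttpRequest")` yields `["xml", "http", "request"]`, and `split_identifier("my_threshhold_value")` yields `["my", "threshhold", "value"]`, each piece ready to be checked against a dictionary or ignore list.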
I would check for consistency first: if a token is repeated many times, it is probably what you meant, and a token with a small Levenshtein distance from it is probably misspelled.
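That consistency check could be sketched like this (the thresholds `min_count` and `max_dist`, and the helper names, are illustrative assumptions): count every token, treat frequent ones as intentional, and flag rare tokens that sit within a small edit distance of a frequent one.

```python
from collections import Counter

def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def likely_typos(tokens, min_count=3, max_dist=2):
    """Map each rare token to a frequent token it is suspiciously close to.

    Tokens seen at least `min_count` times are assumed intentional;
    rarer tokens within `max_dist` edits of one are flagged.
    """
    counts = Counter(tokens)
    common = [t for t, c in counts.items() if c >= min_count]
    suspects = {}
    for tok, c in counts.items():
        if c >= min_count:
            continue
        for ref in common:
            if tok != ref and levenshtein(tok, ref) <= max_dist:
                suspects[tok] = ref
                break
    return suspects
```

With a dozen occurrences of "threshhold" and one stray "threshold", `likely_typos` would flag the lone "threshold" as the deviant spelling, which is exactly the consistency-over-dictionary behavior described here.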
On one project I decided that (by golly) I was going to spell "threshold" with three Hs (threshhold), as it's pronounced! I later changed it back, but before doing so I'd want the dozen "threshhold"s to outweigh an outlying "threshold".
Likewise, an incorrectly named item could still be a valid English word, so a dictionary check alone won't catch every mistake.