Using human judgments to examine the validity of automated grammar, syntax, and mechanical errors in writing


  • Scott A. Crossley
  • Franklin Bradfield
  • Analynn Bustamante



Keywords: assessment, automatic writing evaluation, grammar, mechanics, natural language processing, writing quality


This study introduces GAMET, which was developed to help writing researchers examine the types and percentages of structural and mechanical errors in texts. GAMET is a desktop application that expands LanguageTool v3.2 through a user-friendly graphical user interface that affords the automatic assessment of writing samples for structural and mechanical errors. GAMET is freely available, works on a variety of operating systems, affords document batch processing, and groups errors into a number of structural and mechanical error categories. This study also tests LanguageTool’s validity using hand-coded assessment for accuracy and meaningfulness on first language (L1) and second language (L2) writing corpora. The study also examines how well LanguageTool replicates human coding of structural and mechanical errors in an L1 corpus, as well as assessing associations between GAMET output and human ratings of essay quality. Results indicate that LanguageTool can be used to successfully locate errors within text. However, while the precision of LanguageTool is high, its recall of errors is low, especially for punctuation errors. Nevertheless, the errors coded by LanguageTool show significant correlations with human ratings of writing quality and with human-coded grammar and mechanics errors. Overall, the results indicate that while LanguageTool fails to flag a number of errors, the errors it does flag provide an accurate profile of the structural and mechanical errors made by writers.
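The precision/recall pattern reported above (most flagged errors are genuine, but many true errors go unflagged) can be illustrated with a minimal sketch. The counts below are invented for illustration only; they are not the study's data.

```python
# Hypothetical illustration of precision vs. recall for an automated
# error detector such as LanguageTool (counts are invented, not from the study).

def precision_recall(true_positives: int, false_positives: int, false_negatives: int):
    """Return (precision, recall) from basic confusion-matrix counts."""
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return precision, recall

# A detector that flags few errors but is usually right when it does:
# high precision, low recall -- the pattern the study reports.
p, r = precision_recall(true_positives=80, false_positives=10, false_negatives=120)
print(f"precision={p:.2f}, recall={r:.2f}")  # precision=0.89, recall=0.40
```

Under these hypothetical counts, the detector is correct about nine times out of ten when it flags an error, yet it finds only 40% of the errors present, mirroring the high-precision, low-recall profile described in the abstract.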



How to Cite

Crossley, S. A., Bradfield, F., & Bustamante, A. (2019). Using human judgments to examine the validity of automated grammar, syntax, and mechanical errors in writing. Journal of Writing Research, 11(2), 251–270.