Ahead of the Curve: How PEG™ Has Led Automated Scoring for Years


What is PEG™?

PEG, or Project Essay Grade, is the automated scoring system at the core of ERB Writing Practice.  It was invented in the 1960s by Ellis Batten Page, a former high school English teacher who spent “many long weekends sifting through stacks of papers wishing for some help.” His guiding principles? 1) the more we write, the better writers we become, and 2) computers can grade as reliably as their human counterparts (Page, 2003).  The state of computing at the time of Page’s invention left little room for automation, so PEG lay dormant until the mid-1980s.  Because Page’s two principles are as relevant today as they were then, PEG was given new life in the 1990s, when computerization became feasible, scoring essays for the NAEP, Praxis, and GRE testing programs.  PEG was eventually acquired by ERB’s longtime partner, Measurement Inc., and continues to evolve and find new uses today.

The foundational concept of automated scoring is that good writing can be predicted.  PEG and other systems require training essays that have already been scored by humans, and they use those essays to build scoring (or prediction) models.  A model typically includes 30-40 features, or variables, that predict human ratings across a set of essays.  Typical examples of such variables include sentence length, use of higher-level vocabulary, and grammar.  In most instances, the combination of these variables yields correlations with human ratings in the mid .80s (where 1.0 is a perfect correlation), a high level of prediction accuracy that is typically higher than the agreement between two independent human raters.  Once the model is trained, the automated scoring system “reads” each subsequent essay, quantifies its value on each variable in the model, and applies the prediction model to produce a score.
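
To make that process concrete, here is a minimal, hypothetical sketch of training a scoring model on human-scored essays and then using it to score a new one.  The three features, the sample data, and the simple linear model are invented for illustration only; PEG’s actual features and modeling approach are not shown here.

    # Illustrative sketch only: a toy "automated scoring" model trained on
    # hand-scored essays.  Feature names and data are hypothetical; PEG's
    # actual features, weights, and modeling approach are proprietary.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Each row is one training essay: [average sentence length,
    # share of higher-level vocabulary, grammar errors per 100 words].
    # Scores are hypothetical human ratings on a 1-6 rubric.
    X_train = np.array([
        [11.2, 0.04, 3.1],
        [14.5, 0.07, 2.0],
        [17.8, 0.11, 1.2],
        [9.6,  0.03, 4.4],
        [16.1, 0.09, 1.5],
        [13.0, 0.06, 2.6],
    ])
    y_human = np.array([2, 3, 5, 1, 4, 3])

    # Fit the prediction model on the human-scored training essays.
    model = LinearRegression().fit(X_train, y_human)

    # "Read" a new essay: quantify the same features, then predict its score.
    new_essay_features = np.array([[15.3, 0.08, 1.8]])
    predicted_score = model.predict(new_essay_features)[0]
    print(f"Predicted score: {predicted_score:.1f}")

    # Agreement with human raters is usually reported as a correlation between
    # predicted and human scores, ideally on a held-out set of essays.
    r = np.corrcoef(model.predict(X_train), y_human)[0, 1]
    print(f"Correlation with human scores: {r:.2f}")

In practice a system like PEG would use far more training essays and 30-40 features rather than three, but the workflow is the same: extract features, fit a model to human ratings, then apply that model to new essays.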

Despite the proven accuracy of automated scoring systems, a common criticism is that such systems do not understand the meaning of a student-written essay.  Humans can rate the quality of an idea or the strength of an argument in ways that computers cannot, even if such ratings can be idiosyncratic and inconsistent at times.  That criticism is valid, but the 30-40 variables used by PEG represent the traits and skills of good writing, and thus are highly relevant to budding writers who need feedback to improve their writing as they practice.  To balance the automated PEG feedback, ERB Writing Practice also includes options for users to collect feedback from peers and/or teachers.  Teachers can give quick, quantitative ratings on how effectively students used textual evidence and how accurately the content of their writing addresses a given prompt topic.

When PEG was first used operationally, its focus was on predicting scores holistically; that is, recovering the overall writing score a human assigned to the essay.  Over time, scoring evolved to provide feedback on distinct traits of effective writing, and separate scoring models were developed for different genres.  Today, PEG uses separate models for three genres (argumentative, informational/explanatory, and narrative) and provides scores on six characteristics of effective writing, outlined below with a brief illustrative sketch after the list (learn more at support.wpponline.com).

  1. Development of Ideas — The writer’s presentation of supporting details and information pertinent to their central idea.
  2. Organization — The writer’s overall plan (coherence) and internal weaving together of ideas (cohesion).
  3. Style — The use of strong word choices and varied sentence constructions to establish a unique voice that connects with the audience.
  4. Word choice — The appropriate and precise application of advanced vocabulary to the essay.
  5. Sentence fluency — The use of complex and varied sentences to skillfully create a smooth flow of ideas.
  6. Conventions — Conventions include grammar, usage, pronoun reference, consistency in number and person, and mechanics (spelling, capitalization, punctuation, and paragraphing).
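
The sketch below is a hypothetical illustration of how genre-specific models and the six trait scores above might be organized in code.  The names, rubric values, and placeholder logic are invented for clarity and do not represent PEG’s actual implementation or interface.

    # Hypothetical illustration only: genre-specific scoring models that each
    # return the six trait scores described above.  Names and values are
    # invented; they do not reflect PEG's actual implementation or API.
    from typing import Dict

    TRAITS = (
        "development_of_ideas", "organization", "style",
        "word_choice", "sentence_fluency", "conventions",
    )

    GENRES = ("argumentative", "informational_explanatory", "narrative")

    def score_essay(essay_text: str, genre: str) -> Dict[str, float]:
        """Select the model trained for the prompt's genre and score each trait."""
        if genre not in GENRES:
            raise ValueError(f"No scoring model for genre: {genre}")
        # Placeholder: a real system would extract feature values from the essay
        # and apply the genre-specific prediction model for each trait.
        return {trait: 3.0 for trait in TRAITS}

    # A teacher-created argumentative prompt would use the argumentative model.
    trait_scores = score_essay(
        "Students should have longer lunch breaks because...",
        genre="argumentative",
    )
    print(trait_scores)

The key design idea is the one described in the next paragraph: the prompt’s genre determines which trained model is applied, and every essay receives a score for each of the six traits.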

The strong reliability of PEG scoring across genres has also enabled teachers to introduce their own prompts for automated essay scoring.  When teachers do so, they select the PEG model that aligns with the genre of their writing prompt, ensuring more nuanced automated scoring.

Since the advent of PEG, other automated essay scoring systems have been launched, and research has compared their accuracy.  In a recent study conducted by the National Center for Education Statistics, PEG was shown to be the most accurate among automated scoring alternatives at scoring prompts developed for The Nation’s Report Card (NCES, 2022).  Research has also examined the efficacy of writing practice with PEG scoring.  One important study found that after “controlling for students’ initial writing quality and the amount they used PEG writing, students who used PEG produced higher quality essays at the end of the intervention … 22% higher than those who didn’t” (Palermo, 2018).

So what does this all mean for ERB members?

Our purpose at ERB is to provide member schools with scientifically developed measures they can use to understand gaps in curriculum and instruction, as well as specific areas where students can improve.  ERB Writing Practice is a new program that provides students and educators with a steady stream of reliable data they can use to target improvements in each student’s writing.  It addresses the enormous time commitment teachers face when grading papers by hand, and there is evidence to support its efficacy in improving student writing.  These benefits open many opportunities for students to write more, and in doing so, become better writers.

  

References

Page, E. B. (2003).  Project Essay Grade: PEG.  In M. Shermis & J. Burstein (Eds.), Automated Essay Scoring: A Cross-Disciplinary Perspective.  Mahwah, NJ: Erlbaum.

Palermo, C. (2018).  Research study finds using PEG Writing helps students write higher quality essays.  Retrieved August 12, 2022, from https://measurementinc.com/news/research-study-finds-using-peg-writing-helps-students-write-higher-quality-essays

NCES. (2022, January 21).  Four Teams Win Top Prize in Automated Scoring Challenge for The Nation’s Report Card.  Retrieved August 12, 2022, from https://nces.ed.gov/whatsnew/press_releases/1_21_2022.asp


Contact your Member Services Director or submit a request form if you have questions about ERB Writing Practice.

