
I'm new to rubrics – Can you tell me more?



Short answer: A rubric is a scoring guide. It spells out what “good” looks like for an assignment by listing the criteria you’ll grade (e.g., Focus, Evidence, Organization, Style), the performance levels (e.g., 1–4 or 1–5), and the descriptors that define each level. Rubrics make grading faster, clearer, and fairer—for both teachers and students.


Why rubrics matter

  • Clarity for students: They see the target before they start, not after they submit.
  • Consistency for teachers: Scores are anchored to agreed descriptors, not mood or guesswork.
  • Actionable feedback: Each criterion explains what to improve next time.
  • Team alignment: Departments and PLCs can calibrate expectations across courses and grade bands.


Why the AI needs a rubric (how EssayGrader uses it)

Our AI isn’t “guessing.” It evaluates student writing against your rubric:

  • Criteria mapping: The model looks for evidence that corresponds to each criterion (e.g., claim quality, coherence, textual support).
  • Descriptor matching: It selects the best-fit level using your exact wording (that’s why detailed descriptors are important).
  • Scale & weights: It applies your point scale (whole or half points) and your criterion weights to compute the final score.
  • Comments: It generates criterion-specific feedback aligned to the level chosen, so comments reflect your standards, not generic tips.
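To make the “Scale & weights” step concrete, here is a minimal sketch of how a weighted final score can be computed from per-criterion scores. This is an illustrative example only, not EssayGrader’s internal implementation; the criterion names and weights are made up.

```python
def weighted_score(scores, weights):
    """Combine per-criterion scores into one final score.

    scores  -- criterion name -> level awarded (half points allowed)
    weights -- criterion name -> weight; weights should sum to 1
    """
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights should sum to 1"
    return sum(scores[name] * weights[name] for name in scores)

# Hypothetical essay: Argument weighted most heavily, per the rubric.
scores = {"Argument": 3.5, "Evidence": 3.0, "Conventions": 4.0}
weights = {"Argument": 0.5, "Evidence": 0.3, "Conventions": 0.2}
print(round(weighted_score(scores, weights), 2))  # 3.5*0.5 + 3.0*0.3 + 4.0*0.2 = 3.45
```

Changing a weight shifts the final score without touching any level descriptors, which is why weighting is a separate knob from the scale itself.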


Tip: If you duplicate a Platform Rubric, keep the descriptors rich and specific—AI performance improves when the rubric is explicit.


Anatomy of a good rubric

  • Clear criteria: Measure distinct skills (e.g., don’t combine “grammar” and “evidence” in one line).
  • Ordered levels: 4- or 5-point scales are most common (we support others too).
  • Rich descriptors: Use evidence-based language (“integrates relevant, well-cited sources”) instead of vague terms (“good evidence”).
  • Weights that reflect priority: If Argument quality matters most, give it more weight than Conventions.
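The parts listed above fit together as a simple structure: criteria, an ordered level scale, per-level descriptors, and weights. The sketch below shows one way to model that; it is illustrative only (not EssayGrader’s data format), and the criterion names and descriptor text are examples.

```python
# A rubric as plain data: an ordered scale, plus criteria that each
# carry a weight and evidence-based descriptors for every level.
rubric = {
    "scale": [1, 2, 3, 4],  # ordered performance levels
    "criteria": [
        {
            "name": "Argument",
            "weight": 0.5,  # weights reflect priority
            "descriptors": {
                4: "Precise claim; integrates relevant, well-cited sources.",
                3: "Clear claim; mostly relevant support.",
                2: "Claim present, but support is thin or uneven.",
                1: "No clear claim; little or no support.",
            },
        },
        {
            "name": "Conventions",
            "weight": 0.5,
            "descriptors": {
                4: "Consistent control of grammar, usage, and mechanics.",
                3: "Minor errors that do not impede meaning.",
                2: "Frequent errors that sometimes impede meaning.",
                1: "Errors that obscure meaning throughout.",
            },
        },
    ],
}

# Each criterion measures a distinct skill and defines every level.
for criterion in rubric["criteria"]:
    assert set(criterion["descriptors"]) == set(rubric["scale"])
```

Note how each level’s descriptor names observable evidence rather than vague praise; that specificity is what both a human grader and the AI match against.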


Common rubric “types” you’ll see in our library

These mirror real-world standards and the rubrics teachers use most:


  • Informative / Explanatory: Explains a topic clearly and accurately; emphasizes structure, development, and clarity of ideas.
  • Argumentative / Opinion: Makes a claim, supports it with reasons and evidence, addresses counterclaims. (Elementary often says “Opinion.”)
  • Narrative: Tells a story with purpose; looks at plot, setting, character, pacing, and technique.
  • Expository: A state-specific term that largely overlaps with Informative/Explanatory (e.g., Florida B.E.S.T.).
  • Descriptive (samples): Focuses on sensory detail and precision of language.
  • Reading Response / From Sources: Analyzes or synthesizes information from provided texts (e.g., NY Regents Part 2/3).
  • Language / Conventions: Grammar, usage, mechanics, and control of language; sometimes its own criterion or rubric (e.g., Utah RISE Conventions).
  • Content-Area Writing: Argument or explanation in non-ELA subjects (mirrors CCSS content-area rubrics).
  • Program-specific tasks:
      • AP: DBQ (document-based), LEQ (long essay), IWA (Seminar), AP Research Paper, each with its own program rubric.
      • IB MYP: Criterion-based across subjects (Arts, Design, Sciences, Language & Literature, etc.), with Year/Phase progressions.
      • Alternate assessments (e.g., MSAA): Level-based rubrics tailored to accessibility needs and alternate standards.


Why there are different standards (states, consortia, and programs)

Education in North America uses multiple frameworks. That’s why our Platform Rubrics are grouped by State or Curriculum:


  • State programs (e.g., CAASPP, STAAR, MCAP, ILEARN, ISASP): States adopt specific standards, grade bands, and task types.
  • SBAC states: Several states use SBAC rubrics under local branding (e.g., “OSAS” in Oregon). Titles differ; the skill expectations are closely aligned.
  • Common Core (CCSS): Grade-band rubrics (e.g., 6–8, 9–10, 11–12) for Argument, Informative, Narrative, Reading Response, and Language Skills, including content-area variants.
  • AP/College Board & IB: National/international programs with their own task types and criteria sets.
  • Alternate assessments (MSAA): Standards designed for students with the most significant cognitive disabilities—scales and descriptors reflect those aims.


Bottom line: Pick the rubric that matches your task (argument, narrative, etc.) and your context (state test, AP course, IB year/phase, department standard).


Choosing the right rubric (quick guide)

  • Start with the task: Argument? Informative? Narrative? From sources?
  • Match the context: Use your state or program’s rubric when you’re prepping for those assessments.
  • Check the grade band: Choose the band that fits your students (e.g., CCSS 9–10 vs 11–12).
  • Adjust weights: Emphasize what you value (e.g., more weight on Evidence for research tasks).
  • Duplicate and customize: Tweak descriptors to include assignment-specific expectations and examples.


Where to find and customize rubrics in EssayGrader

  • Go to Rubrics → Create Rubric.
  • Choose Use a template and open the Platform section.
  • Preview a rubric. Click Use to run it as-is, or Duplicate to edit the scale, weights, criterion names, and descriptors.


FAQ


Can I mix criteria from different rubrics?

Yes—duplicate one rubric as your base, then add or replace criteria from others.


Do I have to match my state exactly?

Requirements vary by state, country, and jurisdiction. Using a state-specific rubric is especially helpful for test prep, but many teachers start with a state rubric and then customize it to their course or assignment.


What scale should I use?

4- or 5-point is common. We support half points and other scales—pick what matches your assessment needs.


Will AI grading still work if I customize?

Yes. The AI reads your custom descriptors and scores accordingly—in fact, that’s exactly the point of using a rubric!


What about equity and bias?

Transparent, descriptor-based criteria reduce inconsistency. Calibrate with colleagues and keep exemplars on hand.


Next step

Open the Platform Rubrics tab in the Rubrics Library, pick the rubric that matches your task type and program/context, and use it or duplicate it to make it your own.


Updated on: 09/19/2025
