Google Classroom handles short assignments well, but treats long projects the same way: one deadline, one submission, one grade.

| The Gap: Students often put the work off until the last minute, and teachers only get visibility once the deadline is close or the work is already done.
Before research, we framed a hypothesis instead of a generic “better grading” ask:
If Classroom let instructors decompose a complex assignment into micro‑steps, each with its own deadline, instructions, and rubric‑aligned AI grading feedback, then:
Students would get earlier, more actionable feedback, reducing last‑minute panic.
Instructors would shift effort from repetitive checks to higher‑value feedback.
| This framing intentionally constrained the solution:
We were not trying to “add AI” for the sake of it, but to validate whether scaffolding plus AI grading could change behavior and outcomes without breaking Classroom’s existing simplicity.

Interviews with instructors highlighted that their real bottleneck is not the number of assignments but the nature of feedback.
"I want to see work before it's too late to change anything. By the time I'm grading, the semester is basically over."
They repeatedly correct the same low‑stakes issues, leaving too little time to diagnose conceptual gaps. They also only see work once it is fully formed, which turns grading into an autopsy rather than a diagnostic check‑up.
Students described “big” assignments not as hard but as vague.
"I know the paper is due in three weeks but I don't know where to start, so I just... don't start."
Students struggle to start, pace their work, and know if they’re on track until it’s too late. What looks like procrastination is often a planning and feedback problem: the way the work unfolds doesn’t match how Classroom represents it.
One key insight reframed our opportunity:
Both sides wanted the same thing for different reasons: smaller, visible steps earlier in the process. For instructors, steps reduce unseen risk; for students, steps reduce unbounded anxiety.
I used MoSCoW Prioritisation to keep the MVP focused on one thing: does breaking work into steps with AI grading each one actually change how students work and when teachers step in?

What got cut & why
| Peer Review Loops. Students in interviews wanted to see each other's work at checkpoints, and pedagogically it's well-supported. But it doubles the interaction surface for every micro-assignment.
| Auto-generated micro-steps. Instructors told us the decomposition is the pedagogical thinking; automating it would skip the step where they decide what matters.
| For instructors, core needs became:
Decomposition
I can express how I mentally break down complex work so the tool reflects how I already teach.
Targeted Feedback
I can respond to specific components, not just the final artifact, so I can correct earlier and more surgically.
Rubric-Based AI Grading Feedback
AI should behave like an extension of my standards, not an opaque second opinion.
| For students, core needs became:
Manageable Steps
The system shows me what to do now, not just what is due in three weeks.
Predictive Feedback
I know whether I’m on track before the final deadline, so I can course‑correct without guessing.
This framing pushed us to design a model where “micro‑assignment” is a first‑class object with its own lifecycle (create → do → grade → reflect), instead of a subdivision of existing instructions.
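To make that lifecycle concrete, here is a minimal TypeScript sketch of a micro‑assignment as a first‑class object. Every type and field name is hypothetical; this is the shape the design implies, not Classroom's actual data model.

```typescript
// Hypothetical data model: all names are illustrative, not Classroom's API.
type MicroAssignmentState =
  | "created"      // instructor has defined the step
  | "in_progress"  // student is working on it
  | "submitted"    // student turned it in
  | "graded"       // rubric-aligned feedback returned
  | "reflected";   // student has reviewed the feedback

interface RubricCriterion {
  id: string;
  description: string; // e.g. "Thesis is specific and arguable"
  maxPoints: number;
}

interface MicroAssignment {
  id: string;
  parentAssignmentId: string;  // the umbrella assignment it belongs to
  order: number;               // position in the sequence (1-based)
  title: string;               // e.g. "Outline", "First draft", "Citations"
  instructions: string;
  dueDate: Date;               // its own deadline, not the parent's
  rubric: RubricCriterion[];   // criteria scoped to this step only
  state: MicroAssignmentState; // the create → do → grade → reflect lifecycle
}
```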
I designed two views around how each group thinks.
Instructor View
Breaking down an assignment
The flow guides instructors to break one assignment into chunks, each with its own instructions, deadline, & grading criteria. It matches how teachers already plan the work: outline, draft, citations, final.
| 6-step cap: Micro-assignments are capped at 6 to prevent scaffolds from turning into mini-syllabi. It’s a design guardrail, not a technical limit, and can be revisited for assignment types that genuinely need more.
Track progress before the final deadline
A dashboard that helps instructors track progress across each step of a larger assignment and see where students need support.

Criteria-specific AI feedback:
AI feedback is tied to the exact rubric row for that step, making it narrower but easier for instructors to audit and trust.
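One way to read "tied to the exact rubric row": one model call per criterion, so every comment maps to a single row an instructor can audit. The sketch below reuses the hypothetical RubricCriterion type from the data-model sketch, and callModel stands in for whatever model API would back it; none of this is a real Gemini or Classroom interface.

```typescript
// Hypothetical: scope AI feedback to one rubric row at a time.
// RubricCriterion comes from the data-model sketch above.
interface CriterionFeedback {
  criterionId: string;
  meetsCriterion: boolean;
  comment: string; // feedback tied to this rubric row only
}

async function gradeStep(
  submissionText: string,
  rubric: RubricCriterion[],
  callModel: (prompt: string) => Promise<string> // placeholder for the model API
): Promise<CriterionFeedback[]> {
  const results: CriterionFeedback[] = [];
  for (const criterion of rubric) {
    // One prompt per row: narrower output, easier to audit than one holistic grade.
    const prompt =
      `Criterion: ${criterion.description}\n` +
      `Submission:\n${submissionText}\n` +
      `Does the submission meet this criterion? Answer "yes" or "no", then one sentence of feedback.`;
    const answer = await callModel(prompt);
    results.push({
      criterionId: criterion.id,
      meetsCriterion: answer.trim().toLowerCase().startsWith("yes"),
      comment: answer,
    });
  }
  return results;
}
```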

Student view
One task at a time
Students see one micro-assignment at a time, turning a large assignment into smaller, self-contained tasks that keep attention on what to do next.

Feedback before the next step
Each micro-assignment closes its own loop: students see the criteria, submit, and get feedback before the next step begins.

Design Decision: Visible and sequential, not locked.
Students can read the full checklist from the start, but must submit in order so feedback on one step can shape the next.
| Why it matters: If students read ahead and start earlier, that’s an early sign the visibility itself is improving planning behavior.
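A sketch of that gating rule, again on the hypothetical MicroAssignment type from earlier: reading is never blocked, but a step only accepts a submission once every earlier step has been turned in.

```typescript
// Hypothetical gating rule: everything is visible, submission is sequential.
function canSubmit(steps: MicroAssignment[], stepId: string): boolean {
  const ordered = [...steps].sort((a, b) => a.order - b.order);
  const target = ordered.find((s) => s.id === stepId);
  // Can't submit a step that doesn't exist or was already turned in.
  if (!target || target.state !== "created" && target.state !== "in_progress") {
    return false;
  }
  // Every earlier step must already be submitted (or further along).
  return ordered
    .filter((s) => s.order < target.order)
    .every((s) => s.state !== "created" && s.state !== "in_progress");
}
// Note: rendering the checklist never calls canSubmit; visibility is unconditional.
```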
Across both views, I kept to Google Classroom’s existing patterns for typography, hierarchy, and actions so the feature feels like an evolution of the product, not a parallel tool that instructors must mentally context‑switch into.
Google Workspace for Education uses tiered plans, and this feature fits naturally into the paid tiers. It gives schools a clear reason to upgrade: less instructor effort, better completion rates, and a stronger response to platforms like Canvas and Blackboard that already support modular assignments.

The Gemini integration creates a high-frequency, measurable surface for Google's AI inside a product used by 250M+ users, making the feature commercially important beyond EdTech alone.
Beyond the core flows, I defined acceptance criteria for the key stories and included accessibility requirements so the concept stayed realistic to build and use. I then mapped the roadmap in three phases.
Phase 1: Browser extension (current status: failed miserably with Google auth)
The browser extension MVP includes the minimum needed to test whether it changes student behavior inside Google Classroom.
Phase 2: Expansion
Later versions can add richer question types and class-level step analytics once the core extension loop is validated.
Phase 3: Motivation and visibility
The extension can add progress tracking, completion visuals, and light streak mechanics once we know what actually changes behavior, so gamification amplifies signal instead of adding noise.
Start with a Wizard-of-Oz trial
If I revisited this project, I would prioritize a lightweight prototype or Wizard‑of‑Oz trial inside a single institution before investing in a full engineering track.
Expose AI confidence to instructors, not just grades
AI grading could shift from scoring everything to triaging: high-confidence submissions auto-pass, while low-confidence or outlier ones are flagged for human review, shrinking a teacher's review queue from 30 submissions to 6 while building trust.
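A minimal sketch of that triage rule, assuming the grader returns a confidence score alongside each judgment (an assumption; nothing guarantees calibrated confidence from a real model):

```typescript
// Hypothetical triage: route by model confidence instead of grading everything equally.
interface GradedSubmission {
  submissionId: string;
  passed: boolean;    // the model's rubric judgment
  confidence: number; // 0..1, assumed to accompany the model's output
}

function triage(graded: GradedSubmission[], threshold = 0.9) {
  const autoPass = graded.filter((g) => g.passed && g.confidence >= threshold);
  const needsReview = graded.filter((g) => !g.passed || g.confidence < threshold);
  // e.g. 30 submissions in, ~6 in the instructor's queue when most are confidently fine.
  return { autoPass, needsReview };
}
```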
A question I most want data on
Does scaffolding actually flatten the spike of last‑minute submissions, or does it just create many small spikes?
| Interview data pointed to the former, but log‑level data on submission timing, revisions, and final outcomes would make the investment case much stronger.
The best tools in education don't do more. They surface the right thing at the right moment.
Classroom's scale makes that a genuinely hard design problem. This was one attempt at a structural answer, not a solution to how students learn, but a change to when feedback meets them.
