Micro-Assignments
with AI Grading Feedback

NATURE

Product Feature Concept (End-to-End AI Design)

TEAM

1 User Researcher
1 Product Designer (Me!)

TOOL

Lovable
Replit
Claude

TIMELINE

3 Days

TL;DR

Problem: Google Classroom treats a 10-week research paper the same as a one-question quiz: one deadline, one submission, zero checkpoints in between. What looks like student procrastination is often a planning and feedback problem.

Solution: A 0→1 feature concept for Google Classroom that breaks complex assignments into scaffolded milestones, each with rubric-based AI grading feedback, so students know whether they're on track before it's too late and teachers see work while it can still change.

Outcome: Shipped a live prototype and pitch deck. The feature strengthens Google's case for institutional upgrades from free to paid Workspace tiers by reducing instructor effort without changing how Classroom already works.

My Role: Owned product design and strategy, with significant product management contributions: hypothesis framing, prioritisation, and roadmapping.

CONTEXT

Google Classroom is optimized for submission management, not learning progress.

Google Classroom handles short assignments well, but treats long projects the same way: one deadline, one submission, one grade.

| The Gap: Students often put the work off until late, and teachers only get visibility once the deadline is close or the work is already done.

HYPOTHESIS

Breaking work into checkpoints would create better moments for feedback than one final submission.

Before research, we framed a hypothesis instead of a generic “better grading” ask:

If Classroom let instructors decompose a complex assignment into micro‑steps, each with its own deadline, instructions, and rubric‑aligned AI grading feedback, then:

  • Students would get earlier, more actionable feedback, reducing last‑minute panic.

  • Instructors would shift effort from repetitive checks to higher‑value feedback.

| This framing intentionally constrained the solution:
We were not trying to “add AI” for the sake of it, but to validate whether scaffolding plus AI grading could change behavior and outcomes without breaking Classroom’s existing simplicity.

RESEARCH REFRAMED THE HYPOTHESIS

Interviews with instructors highlighted that their real bottleneck is not the number of assignments but the nature of feedback.

"I want to see work before it's too late to change anything. By the time I'm grading, the semester is basically over."

They repeatedly correct the same low-stakes issues, leaving too little time to diagnose conceptual gaps. They also only see work when it is fully formed, which turns grading into an autopsy rather than a diagnostic check-up.

Students described “big” assignments not as hard but as vague.

"I know the paper is due in three weeks but I don't know where to start, so I just... don't start."

Students struggle to start, pace their work, and know if they’re on track until it’s too late. What looks like procrastination is often a planning and feedback problem: the way the work unfolds doesn’t match how Classroom represents it.

One key insight reframed our opportunity:
Both sides wanted the same thing for different reasons: smaller, visible steps earlier in the process. For instructors, steps reduce unseen risk; for students, steps reduce unbounded anxiety.

FROM RESEARCH TO PRODUCT

Ten user stories, but one underlying question: what behaviour are we changing?

I used MoSCoW Prioritisation to keep the MVP focused on one thing: does breaking work into steps with AI grading each one actually change how students work and when teachers step in?

What got cut & why
| Peer Review Loops. Students in interviews wanted to see each other's work at checkpoints, and pedagogically it's well-supported. But it doubles the interaction surface for every micro-assignment.

| Auto-generated micro-steps. Instructors told us the decomposition is the pedagogical thinking; automating it would skip the step where they decide what matters.

For instructors, the core needs became:

Decomposition

I can express how I mentally break down complex work, so the tool reflects how I already teach.

Targeted Feedback

I can respond to specific components, not just the final artifact,
so I can correct earlier and more surgically.

Rubric-Based AI Grading Feedback

AI should behave like an extension of my standards, not an opaque second opinion.

For students, the core needs became:

Manageable Steps

The system shows me what to do now, not just what is due in three weeks.

Predictive Feedback

I know whether I’m on track before the final deadline, so I can course‑correct without guessing.

This framing pushed us to design a model where “micro‑assignment” is a first‑class object with its own lifecycle (create → do → grade → reflect), instead of a subdivision of existing instructions.
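As a rough illustration of that model (all names here are hypothetical, not from any real Classroom API), a micro-assignment can be sketched as its own object carrying its instructions, deadline, rubric criterion, and lifecycle stage, with the 6-step cap enforced at the parent assignment:

```python
from dataclasses import dataclass, field
from enum import Enum


class Stage(Enum):
    """Lifecycle of a micro-assignment: create → do → grade → reflect."""
    CREATE = "create"
    DO = "do"
    GRADE = "grade"
    REFLECT = "reflect"


@dataclass
class MicroAssignment:
    title: str
    instructions: str
    deadline: str        # ISO date string, e.g. "2025-03-01"
    rubric_row: str      # the single criterion this step is graded against
    stage: Stage = Stage.CREATE

    def advance(self) -> Stage:
        """Move to the next lifecycle stage; stays at REFLECT once finished."""
        order = list(Stage)
        i = order.index(self.stage)
        if i < len(order) - 1:
            self.stage = order[i + 1]
        return self.stage


@dataclass
class Assignment:
    """A complex assignment is an ordered list of first-class steps."""
    title: str
    steps: list[MicroAssignment] = field(default_factory=list)

    def add_step(self, step: MicroAssignment) -> None:
        # The cap of 6 is the design guardrail described above, not a technical limit.
        if len(self.steps) >= 6:
            raise ValueError("Micro-assignments are capped at 6")
        self.steps.append(step)
```

The point of the sketch is the shape, not the fields: each step owns its full lifecycle instead of being a bullet inside one assignment's instructions.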

MICRO-ASSIGNMENTS

The same feature had to feel structural to instructors and sequential to students.

I designed two views around how each group thinks.

Instructor View

Breaking down an assignment
The flow guides instructors to break one assignment into chunks, each with its own instructions, deadline, & grading criteria. It matches how teachers already plan the work: outline, draft, citations, final.

| 6-step cap: Micro-assignments are capped at 6 to prevent scaffolds from turning into mini-syllabi. It’s a design guardrail, not a technical limit, and can be revisited for assignment types that genuinely need more.

Track progress before the final deadline
A dashboard that helps instructors track progress across each step of a larger assignment and see where students need support.

Criteria-specific AI feedback
AI feedback is tied to the exact rubric row for that step, making it narrower but easier for instructors to audit and trust.
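One way to picture that constraint (a minimal sketch with an invented helper, not a real Classroom or Gemini prompt) is a template that hands the model exactly one rubric row per step, which is what keeps the feedback narrow enough to audit:

```python
def build_feedback_prompt(step_title: str, rubric_row: str, submission_text: str) -> str:
    """Constrain AI feedback to a single rubric criterion for one step.

    Grading against one named criterion at a time makes the output easy for
    an instructor to check against their own rubric. The wording below is
    illustrative only.
    """
    return (
        f"You are reviewing the step '{step_title}' of a larger assignment.\n"
        f"Evaluate the submission ONLY against this criterion: {rubric_row}\n"
        f"Give two or three specific, actionable comments tied to that criterion.\n\n"
        f"Submission:\n{submission_text}"
    )
```

Scoping the prompt per rubric row is the design choice that makes the feedback "narrower but easier to trust": an instructor can read one criterion and one comment set and immediately see whether the AI applied their standard.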

Student View

One task at a time
Students see one micro-assignment at a time, turning a large assignment into smaller, self-contained tasks that keep attention on what to do next.

Feedback before the next step
Each micro-assignment closes its own loop: students see the criteria, submit, and get feedback before the next step begins.

Design Decision: Submission is visible and sequential, not locked.
Students can read the full checklist from the start, but must submit in order, so feedback on one step can shape the next.

| Why it matters: If students read ahead and start earlier, that’s an early sign the visibility itself is improving planning behavior.
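The rule itself is small enough to state in a few lines. As a sketch (the function name is hypothetical): every step is readable at any time, but only the first not-yet-done step accepts a submission:

```python
def can_submit(steps_done: list[bool], index: int) -> bool:
    """Visible but sequential: a step accepts a submission only when it is
    the earliest step not yet completed. Reading ahead is always allowed;
    this gate applies to submission only."""
    if steps_done[index]:
        return False                    # this step was already submitted
    return all(steps_done[:index])      # every earlier step must be done first

# Example: with steps [done, done, todo, todo], only index 2 can submit.
```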

Across both views, I kept to Google Classroom’s existing patterns for typography, hierarchy, and actions so the feature feels like an evolution of the product, not a parallel tool that instructors must mentally context‑switch into.

BUSINESS FIT

It earns its place in paid tiers by scaling instructor intent, not instructor time.

Google Workspace for Education uses tiered plans, and this feature fits naturally in the paid ones. It gives schools a clear reason to upgrade: less instructor effort, better completion, and a stronger response to platforms like Canvas and Blackboard that already support modular assignments.

The Gemini integration creates a high-frequency, measurable surface for Google's AI inside a product used by 250M+ users, making the feature commercially important beyond EdTech alone.

ROADMAP

Three phases: validate the loop, expand the surface, then add motivation.

Beyond the core flows, I defined acceptance criteria for the key stories and included accessibility requirements so the concept stayed realistic to build and use. I then mapped the roadmap in three phases.

Phase 1: Browser extension (current status: failed on Google auth)

The browser extension MVP includes the minimum needed to test whether it changes student behavior inside Google Classroom.

Phase 2: Expansion

Later versions can add richer question types and class-level step analytics, once the core extension loop is validated.

Phase 3: Motivation and visibility

The extension can add progress tracking, completion visuals, and light streak mechanics, once we know what actually changes behavior, so gamification amplifies signal instead of adding noise.

REFLECTION

What I'd do differently.

Start with a Wizard-of-Oz trial
If I revisited this project, I would prioritize a lightweight prototype or Wizard‑of‑Oz trial inside a single institution before investing in a full engineering track.

Expose AI confidence to instructors, not just grades
AI grading feedback would shift from scoring everything to triaging: high-confidence submissions auto-pass, while low-confidence outliers are flagged for human review, shrinking a teacher's review queue from 30 submissions to 6 while building trust.
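That triage is easy to sketch (the 0.85 threshold below is an illustrative assumption, not a validated value; the 30-to-6 figure comes from the scenario above, not from this code):

```python
def triage(submissions: list[tuple[str, float]], threshold: float = 0.85):
    """Split AI-graded submissions into an auto-pass list and a human-review
    queue based on the model's grading confidence. Each submission is a
    (student_id, confidence) pair; the threshold is a tunable assumption."""
    auto_pass, needs_review = [], []
    for student_id, confidence in submissions:
        (auto_pass if confidence >= threshold else needs_review).append(student_id)
    return auto_pass, needs_review
```

The design point is what the instructor sees: only the `needs_review` queue lands in their inbox, and surfacing the confidence value alongside each flagged item is what turns the AI from an opaque grader into an auditable assistant.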

What I'd want to know first.

A question I most want data on
Does scaffolding actually flatten the spike of last‑minute submissions, or does it just create many small spikes?

| Interview data pointed to the former, but log‑level data on submission timing, revisions, and final outcomes would make the investment case much stronger.

What I learnt.

The best tools in education don't do more. They surface the right thing at the right moment.
Classroom's scale makes that a genuinely hard design problem. This was one attempt at a structural answer, not a solution to how students learn, but a change to when feedback meets them.