We check what matters. You fix what's broken and improve your practice.
I built The Crit as a complete design education platform: designing the UX, implementing a multi-agent AI system with specialized critique agents, and creating 50+ resource pages. The challenge wasn't just generating AI feedback; it was distilling 15 years of teaching critique into an 847-word prompt that delivers the honest, actionable feedback designers actually need. This case study demonstrates how design education expertise, multi-agent AI architecture, and content strategy compound into a platform that helps designers improve.
👤 Role: Design Professor, Product Designer, Full-Stack Engineer, AI Systems Architect
⚡ Stack: Next.js 14, TypeScript, Supabase, Claude AI, Anthropic SDK, Vercel, Tailwind CSS, Framer Motion, Formidable
📅 Timeline: Multiple iterations → 3 full rebuilds → Live production platform
🎯 Outcome: Multi-agent AI system, 2-3 min critique time, 50+ resource pages, 752k+ monthly searches, 100% visual analysis
🔗 Live Product: thecrit.co • Resources
I'm a full-time design professor. I teach the exact people I'm building for every single day. I kept seeing the same pattern: designers posting portfolios asking for feedback, getting "looks great!" responses, then wondering why they're not getting interviews.
💬 "Everyone says my portfolio looks good, but I've applied to 50 jobs and got 2 interviews. What am I missing?"
— Real designer feedback I kept seeing on Reddit
The post-pandemic shift: Design programs moved online permanently. Class sizes ballooned. One-on-one critique turned into breakout rooms. Studio culture vanished. Meanwhile, AI tools are making everyone a "designer"—but fewer people know how to design well. The tools got democratized, but the education didn't.
What I saw again and again: designers posting their work, getting polite praise, and walking away with nothing they could act on.
The paradox: Right when designers need the most support to level up and differentiate themselves from AI-generated mediocrity, they have the least access to quality feedback. AI democratized the tools, but it actually made expertise more valuable, not less.
The insight: Designers don't need another "portfolio review service" that charges $200 and takes a week. They need instant, honest feedback from someone who's been there—someone who can see what they can't see anymore and tell them exactly what to fix. They need critique that teaches, not just critiques.
Before writing a single line of code, I mapped out what designers actually need from critique. After 15 years teaching, I know what helps designers improve—and it's not what most AI tools are delivering.
This isn't random—it's based on how I actually run critiques in class. Every critique follows this structure:
🧠 Key Insight
The one thing that will unlock their next level. Not obvious feedback, but the deeper pattern they're missing.
👀 Overview
What's working (build confidence first), what's not working (honest but constructive), how it fits their stated goals.
📐 Principles at Play
The design theory behind my feedback. Educational component—teach, don't just critique. Connects their work to broader design knowledge.
🚀 Suggested Next Steps
Specific, actionable improvements. Prioritized (do this first, then this). Realistic for their skill level.
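To make the format concrete, here's a minimal sketch of how that four-part structure could be typed in the platform's TypeScript code. The field names are illustrative, not the production schema.

```typescript
// Illustrative sketch of the four-part Crit Sheet (field names are assumptions).
interface CritSheet {
  keyInsight: string;            // 🧠 the one pattern that unlocks their next level
  overview: {
    working: string[];           // 👀 what's working (build confidence first)
    notWorking: string[];        // honest but constructive gaps
    goalFit: string;             // how the work maps to their stated goals
  };
  principlesAtPlay: {            // 📐 the design theory behind the feedback
    principle: string;
    howItApplies: string;
  }[];
  nextSteps: {                   // 🚀 specific, prioritized, realistic improvements
    priority: number;            // 1 = do this first
    action: string;
  }[];
}
```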
Most AI feedback tools use one generic agent. I built 5 specialized agents:
Design decision: Each agent specializes in their domain, but they all speak in the same designer-to-designer voice—honest, direct, actionable. No fluff. Just what you need to fix.
The product promise is simple: designers submit their work, get context about what they're worried about, and receive honest feedback in minutes—not days. Behind that promise is a multi-stage AI system that analyzes actual design files and delivers feedback that actually helps.
The core architecture principle: Designer submits work → System routes to specialized agent → Claude Vision analyzes actual design files → Agent generates hyper-relevant feedback in consistent voice → Designer gets actionable critique
Formidable handles file uploads, validates submission types (images, PDFs, Figma links), converts PDFs to images, and stores them in Supabase.
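A simplified sketch of what this upload step can look like, assuming a Pages Router style API route; the bucket name, form field names, and env vars are assumptions.

```typescript
// pages/api/submit.ts: simplified upload handler sketch (names are assumptions).
import type { NextApiRequest, NextApiResponse } from 'next';
import formidable from 'formidable';
import fs from 'node:fs';
import { createClient } from '@supabase/supabase-js';

// Disable Next's body parser so formidable can read the multipart stream.
export const config = { api: { bodyParser: false } };

const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_SERVICE_ROLE_KEY!);
const ALLOWED_TYPES = ['image/png', 'image/jpeg', 'application/pdf'];

export default async function handler(req: NextApiRequest, res: NextApiResponse) {
  const form = formidable({ maxFileSize: 20 * 1024 * 1024 });
  const [fields, files] = await form.parse(req);

  const upload = Array.isArray(files.design) ? files.design[0] : files.design;
  if (!upload || !ALLOWED_TYPES.includes(upload.mimetype ?? '')) {
    return res.status(400).json({ error: 'Unsupported file type' });
  }

  // Store the raw file; PDF-to-image conversion happens in a later pipeline step.
  const path = `submissions/${Date.now()}-${upload.originalFilename}`;
  await supabase.storage.from('designs').upload(path, fs.readFileSync(upload.filepath), {
    contentType: upload.mimetype ?? undefined,
  });

  return res.status(200).json({ path, figmaLink: fields.figmaLink?.[0] ?? null });
}
```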
CritMachine analyzes submission type (portfolio? UI design? UX flow?) and routes to appropriate specialized agent.
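A hedged sketch of that routing step. The five agent labels and focus lines are assumptions (the production agents aren't enumerated here); only the classify-then-route shape comes from the description above.

```typescript
// Illustrative routing sketch. Agent labels and focus lines are assumptions.
type SubmissionType = 'portfolio' | 'ui-design' | 'ux-flow' | 'branding' | 'general';

const BASE_VOICE = '<the 847-word designer-to-designer system prompt>';

const agentFocus: Record<SubmissionType, string> = {
  'portfolio': 'Focus on narrative, case-study depth, and hiring signals.',
  'ui-design': 'Focus on hierarchy, spacing, type, and interface patterns.',
  'ux-flow':   'Focus on task flows, friction points, and information architecture.',
  'branding':  'Focus on identity consistency, voice, and application.',
  'general':   'Give a broad critique when the work fits no single category.',
};

// CritMachine step: take the classified submission type and assemble the
// specialized agent's prompt in the shared voice.
function buildAgentPrompt(type: SubmissionType): string {
  return `${BASE_VOICE}\n\n${agentFocus[type] ?? agentFocus.general}`;
}
```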
Claude AI Vision analyzes actual design files—identifying visual hierarchy problems, UX issues, accessibility concerns. Not just reading descriptions—actually seeing the work.
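Roughly what the visual-analysis call looks like with the Anthropic SDK. The model choice, media type, and prompt wiring are assumptions.

```typescript
import Anthropic from '@anthropic-ai/sdk';

const anthropic = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });

// Send the actual design images (base64) plus the designer's context to Claude,
// so feedback is grounded in what the work looks like, not a text description.
async function analyzeDesign(imagesBase64: string[], agentPrompt: string, context: string) {
  const response = await anthropic.messages.create({
    model: 'claude-3-5-sonnet-latest', // model choice is an assumption
    max_tokens: 4096,
    system: agentPrompt, // the 847-word voice plus domain focus
    messages: [
      {
        role: 'user',
        content: [
          ...imagesBase64.map((data) => ({
            type: 'image' as const,
            source: { type: 'base64' as const, media_type: 'image/png' as const, data },
          })),
          { type: 'text' as const, text: `Designer's context: ${context}` },
        ],
      },
    ],
  });

  const block = response.content[0];
  return block.type === 'text' ? block.text : '';
}
```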
Specialized agent generates feedback using 847-word prompt embedding my teaching philosophy. Designer-to-designer voice maintained across all agents.
Critique saved to Supabase, status updated with real-time polling, results delivered to user interface with cache-busting to ensure fresh data.
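On the client, the live status could be polled with a small hook like this sketch; the endpoint path, interval, and status values are assumptions.

```typescript
// Client-side polling sketch (React client component). Endpoint, interval, and
// status values are assumptions; the timestamp param mirrors the cache-busting approach.
import { useEffect, useState } from 'react';

export function useCritStatus(critId: string) {
  const [status, setStatus] = useState<'processing' | 'complete' | 'failed'>('processing');

  useEffect(() => {
    const timer = setInterval(async () => {
      // no-store + timestamp query param keep the client from reading stale responses.
      const res = await fetch(`/api/crits/${critId}/status?t=${Date.now()}`, { cache: 'no-store' });
      const data = await res.json();
      setStatus(data.status);
      if (data.status !== 'processing') clearInterval(timer);
    }, 5000);
    return () => clearInterval(timer);
  }, [critId]);

  return status;
}
```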
The entire system runs without manual intervention—from file upload to specialized critique delivery—while maintaining quality and voice consistency through the 847-word prompt that embeds my teaching expertise.
The Crit isn't just a critique tool. It's a design education platform. I created 50+ resource pages that actually help designers—portfolio guides, design principles, tool reviews. Not SEO fluff. Real content that makes designers better.
50+ Educational Resource Pages covering:
752k+ Monthly Searches: Programmatic templates targeting specific keywords while providing genuine value. SEO-optimized but education-first.
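These pages come from programmatic templates. A hypothetical sketch of how such a template is typically wired up in Next.js 14's App Router; the route path and data helpers are assumptions, not the actual code.

```tsx
// app/resources/[slug]/page.tsx: hypothetical programmatic resource template.
// The '@/lib/resources' data helpers are assumptions.
import type { Metadata } from 'next';
import { notFound } from 'next/navigation';
import { getAllResources, getResource } from '@/lib/resources';

// Pre-render every keyword-targeted resource page at build time.
export async function generateStaticParams() {
  const resources = await getAllResources();
  return resources.map((r: { slug: string }) => ({ slug: r.slug }));
}

export async function generateMetadata({ params }: { params: { slug: string } }): Promise<Metadata> {
  const resource = await getResource(params.slug);
  return resource ? { title: resource.title, description: resource.summary } : {};
}

export default async function ResourcePage({ params }: { params: { slug: string } }) {
  const resource = await getResource(params.slug);
  if (!resource) notFound();
  return <article dangerouslySetInnerHTML={{ __html: resource.html }} />;
}
```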
Built an AI-powered system that automatically responds to design feedback requests on Reddit. Same multi-agent system, same honest voice, but helping designers where they're already asking for help.
How the Reddit automation works:
→ Monitors r/graphic_design, r/design_critiques, r/UI_Design for feedback requests
→ Analyzes submitted URLs using the same multi-agent system
→ Generates critique in designer-to-designer voice
→ Formats response for Reddit (respects character limits, proper formatting)
→ Posts helpful, actionable feedback where designers are already looking
Design decision: Meeting users where they are, not forcing them to come to you. Same critique quality, delivered on the platforms designers already use.
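For illustration, a simplified sketch of the monitoring step using Reddit's public JSON listings. The keyword filter is an assumption, and the authenticated reply step (which requires OAuth) is omitted.

```typescript
// Reddit monitoring sketch. Rate limiting, auth, and the reply step are omitted.
const SUBREDDITS = ['graphic_design', 'design_critiques', 'UI_Design'];
const FEEDBACK_HINTS = ['feedback', 'critique', 'roast my', 'thoughts on'];

interface RedditPost { title: string; url: string; permalink: string; selftext: string }

async function findFeedbackRequests(): Promise<RedditPost[]> {
  const requests: RedditPost[] = [];
  for (const sub of SUBREDDITS) {
    const res = await fetch(`https://www.reddit.com/r/${sub}/new.json?limit=25`, {
      headers: { 'User-Agent': 'thecrit-feedback-bot (sketch)' },
    });
    const listing = await res.json();
    for (const child of listing.data.children) {
      const post: RedditPost = child.data;
      // Keep only posts that look like feedback requests, then hand them to the
      // same multi-agent critique pipeline used on the main platform.
      if (FEEDBACK_HINTS.some((hint) => post.title.toLowerCase().includes(hint))) {
        requests.push(post);
      }
    }
  }
  return requests;
}
```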
Why I started with no-code (Relevance AI): I wanted to move fast and test the concept. Got a prototype running in days.
What didn't work: Hit credit limits, couldn't customize feedback quality. Spent more time fighting the tools than improving the product.
Result: Validated concept, learned no-code tools weren't enough for this problem.
What happened: Switched to n8n + Claude API for more control over AI prompting. Rebuilt everything. Time cost: 3 weeks.
Result: Feedback quality jumped from 6/10 → 8.5/10. Worth it for quality, but more technical overhead.
The UX wake-up call: Google Form submission = not great for a design tool. Built proper frontend with v0 + Cursor. Designers care about UX—the submission experience had to match the critique quality.
Result: Production-grade platform. Multi-step submission flow, real-time status updates, visual analysis pipeline, cache-busting strategies.
What happened: Critiques finished successfully, but the API kept saying "processing" for 3+ minutes. Users saw stale data even though fresh critiques were sitting in the database. Trust = destroyed.
Before: 3+ min stale status
After: Real-time live updates
Solution: Multi-strategy cache-busting—Next.js route configuration (dynamic = 'force-dynamic'), timestamp-based Supabase headers, parallel multi-query approach, extended timeout thresholds (3min → 10min).
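Roughly what that looks like in an App Router route handler; the table and column names are assumptions.

```typescript
// app/api/crits/[id]/status/route.ts: cache-busting sketch (table/column names assumed).
import { NextResponse } from 'next/server';
import { createClient } from '@supabase/supabase-js';

// Opt the route out of Next.js static and data caching entirely.
export const dynamic = 'force-dynamic';
export const revalidate = 0;

export async function GET(_req: Request, { params }: { params: { id: string } }) {
  // Fresh client per request with a timestamp header discourages intermediary caching.
  const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_ANON_KEY!, {
    global: { headers: { 'x-request-time': Date.now().toString() } },
  });

  // Query status and result in parallel rather than serially.
  const [statusQuery, resultQuery] = await Promise.all([
    supabase.from('critiques').select('status').eq('id', params.id).single(),
    supabase.from('critiques').select('result').eq('id', params.id).single(),
  ]);

  return NextResponse.json(
    { status: statusQuery.data?.status ?? 'processing', result: resultQuery.data?.result ?? null },
    { headers: { 'Cache-Control': 'no-store, max-age=0' } },
  );
}
```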
What happened: Analyzing multiple images takes 5-10 minutes. Vercel's default timeout is 5 minutes. Complex critiques were dying mid-generation. Users got nothing.
Before: 5 min timeout limit
After: 10 min extended limit with dynamic thresholds
Solution: Dynamic timeout thresholds based on submission complexity (single image = 60s, 2 images = 90s, 3+ images = 180s), maximum absolute timeout of 10 minutes, intelligent retry logic.
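A sketch of those thresholds as a helper. The values mirror the ones above; the function names and retry wiring are illustrative.

```typescript
// Timeout thresholds scaled to submission complexity (values from the write-up;
// helper names and retry wiring are illustrative).
const ABSOLUTE_MAX_MS = 10 * 60 * 1000; // hard ceiling: 10 minutes

function timeoutForSubmission(imageCount: number): number {
  const base =
    imageCount <= 1 ? 60_000 :   // single image: 60s
    imageCount === 2 ? 90_000 :  // two images: 90s
    180_000;                     // three or more: 180s
  return Math.min(base, ABSOLUTE_MAX_MS);
}

// Simple retry wrapper: give a slow analysis one more attempt before failing.
// On Vercel, the route itself can also export a higher maxDuration (plan-dependent).
async function withRetry<T>(task: () => Promise<T>, attempts = 2): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await task();
    } catch (err) {
      lastError = err;
    }
  }
  throw lastError;
}
```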
What happened: Early versions gave generic AI feedback. "Your design looks good! Consider improving spacing." Useless. Then a Claude model update changed the output style, and I spent 48 hours reworking the prompt.
Solution: Built prompt drift detection into workflow. Now: 847 words of carefully crafted context that maintains consistent designer-to-designer voice even when AI models update. The prompt IS the product.
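One plausible shape for that drift check: confirm each critique still contains the four Crit Sheet sections and hasn't slipped back into generic phrasing. The markers below are assumptions about the output format.

```typescript
// Prompt-drift check sketch: verify a generated critique still follows the
// Crit Sheet structure. Section markers and generic phrases are assumptions.
const REQUIRED_SECTIONS = ['Key Insight', 'Overview', 'Principles at Play', 'Suggested Next Steps'];
const GENERIC_PHRASES = ['looks good', 'consider improving', 'nice work overall'];

interface DriftReport { ok: boolean; missingSections: string[]; soundsGeneric: boolean }

function detectPromptDrift(critique: string): DriftReport {
  const missingSections = REQUIRED_SECTIONS.filter((s) => !critique.includes(s));
  const soundsGeneric = GENERIC_PHRASES.some((p) => critique.toLowerCase().includes(p));
  return { ok: missingSections.length === 0 && !soundsGeneric, missingSections, soundsGeneric };
}
```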
I don't separate design from engineering. As a design engineer and professor, I bridge design education and code—15 years of teaching expertise drives the AI architecture, and design systems are built in code from the start. This project demonstrates how domain expertise (knowing what designers actually need) compounds with technical execution (multi-agent AI, visual analysis, real-time systems) to create a platform that actually helps people improve.
15 years teaching critique shaped the 847-word prompt. The Crit Sheet format mirrors how I run in-person critiques. The specialized agents reflect how I adjust feedback based on what students submit.
One generic agent → mediocre feedback. 5 specialized agents → hyper-relevant critique. Portfolio submissions get portfolio-focused advice. UI designs get interface-specific feedback.
The prompt IS the product. 847 words of carefully crafted context makes the difference between generic AI ("improve your design") and mentor-quality feedback ("your hero section takes 80% of the screen while your value proposition is buried—here's why and how to fix it").
Critique alone isn't enough. 50+ resource pages provide the foundational knowledge designers need. Reddit automation meets users where they already are.
3 full rebuilds to get quality right. Relevance AI → n8n + Claude → Next.js + Anthropic SDK. Each iteration improved feedback quality: 6/10 → 8.5/10 → production-grade.
Most tools use default Tailwind and call it a day. I built The Crit's design system to signal designer credibility—warm orange/purple gradients that feel creative but professional, typography that's readable but has personality, components that feel consistent without being boring.
Why this matters: For a design feedback platform, the design system IS the product credibility. Visual design communicates "this was built by someone who gets it" before users submit their first design.
Inspired by designer workspaces and creative energy:
Design decision: Orange signals warmth and approachability (not intimidating like corporate blue). Purple adds creative sophistication. Together they communicate "professional designer space" not "generic SaaS tool."
Fraunces (Display) — Weights: 400, 600, 700
Used for headings and hero text. Adds personality and visual interest while maintaining readability.
Inter (Sans-Serif) — Weights: 400, 500, 600, 700
Primary font for body text, buttons, and UI elements. Chosen for exceptional readability and professional appearance.
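A sketch of how this pairing is typically loaded with next/font; the CSS variable names and Tailwind mapping are assumptions.

```typescript
// app/fonts.ts: loading the two families with next/font.
// CSS variable names and the Tailwind mapping are assumptions.
import { Fraunces, Inter } from 'next/font/google';

// Display face: headings and hero text (weights 400 / 600 / 700 in use).
export const fraunces = Fraunces({ subsets: ['latin'], variable: '--font-fraunces' });

// Primary face: body text, buttons, UI elements (weights 400 / 500 / 600 / 700 in use).
export const inter = Inter({ subsets: ['latin'], variable: '--font-inter' });

// tailwind.config.ts (excerpt): expose the variables as font utilities.
// fontFamily: {
//   display: ['var(--font-fraunces)', 'serif'],
//   sans: ['var(--font-inter)', 'system-ui', 'sans-serif'],
// }
```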
Built a complete component library with consistent patterns:
Generic AI feedback is worse than no feedback. Trust destroyed instantly when feedback sounds robotic. The 847-word prompt maintains consistent designer-to-designer voice.
One agent giving portfolio advice for UI designs = mediocre. Specialized agents for each domain = hyper-relevant feedback. The routing system makes all the difference.
Spent 48 hours reworking the prompt when Claude updated. The prompt IS the product. 847 words of carefully crafted context maintains voice consistency across AI model updates.
Critiques finishing but API saying "processing" for 3+ minutes destroyed trust. Multi-strategy cache-busting (Next.js config, timestamp headers, parallel queries) solved it. Infrastructure reliability IS product quality.
"Please be harsh" is common feedback request. Designers want real critique, not validation. They know something's off—they just can't see what. Honest feedback beats false confidence.
3 full rebuilds: Relevance AI → n8n + Claude → Next.js + Anthropic SDK. Feedback quality: 6/10 → 8.5/10 → production-grade. Each iteration was worth it.
The Crit is live. Submit your portfolio or design work and see what honest, actionable feedback actually looks like. Or browse the 50+ resource pages I built to help designers improve.
Want to talk about this project? hello@nikkikipple.com