We check what matters. You fix what's broken and improve your practice.
I'm a full-time design professor and serial problem solver. I built The Crit because I was tired of seeing designers get generic "looks good" feedback when they needed honest, actionable critique. After 15 years teaching critique, I know what actually helps designers improve—and it's not what most AI tools are delivering.
This isn't another portfolio review service. The Crit uses specialized AI agents that analyze your actual design files, identify real skill gaps, and deliver feedback in a designer-to-designer voice. It's my teaching philosophy distilled into AI form—847 words of carefully crafted context about design education, critique methodology, and brand voice.
Clean, focused interface that gets out of the way. Designers submit their work, get context about what they're worried about, and receive honest feedback in minutes—not days.
The Crit runs on a multi-agent AI system that routes critiques to specialized agents based on what you're submitting. Portfolios get portfolio-focused feedback. UI designs get interface-specific critique. Each agent speaks in the same designer-to-designer voice—honest, direct, actionable.
Before building, I spent months digging through online design communities. I'm a professor—I teach the exact people I'm building for every single day. But I needed to understand what happens after graduation.
The Reddit rabbit hole: r/graphic_design (2.8M members, 50+ critique requests per day), r/design_critiques (109K members craving useful feedback), r/UI_Design (200K asking "how do I improve?"). What I saw again and again: "Please be harsh" (they want real critique, not validation), "I'm self-taught" (no formal training in feedback), "I've been staring at this for hours" (blind spots they can't see anymore).
The post-pandemic shift: Design programs moved online permanently. Class sizes ballooned. One-on-one critique turned into breakout rooms. Studio culture vanished. Meanwhile, AI tools are making everyone a "designer"—but fewer people know how to design well. The tools got democratized, but the education didn't.
Why I'm uniquely positioned: 15 years teaching critique. Academic + industry fluency. AI early adopter (testing GPT for feedback since 2020). I bridge theory and practice, and I know what actually helps designers improve.
I kept seeing the same thing: designers posting portfolios asking for feedback, getting "looks great!" responses, then wondering why they're not getting interviews. The problem isn't that their work is bad; it's that they can't see their blind spots anymore, and nobody's telling them what actually needs fixing.
The paradox: Right when designers need the most support to level up and differentiate themselves from AI-generated mediocrity, they have the least access to quality feedback. AI democratized the tools, but it actually made expertise more valuable, not less. The difference between a good prompt and a great prompt is 15 years of design education.
My insight: Designers don't need another "portfolio review service" that charges $200 and takes a week. They need instant, honest feedback from someone who's been there—someone who can see what they can't see anymore and tell them exactly what to fix. They need critique that teaches, not just critiques.
Drop your design. Tell us what you're worried about. We'll find the issues in about 3 minutes. Simple as that. Here's the step-by-step process:
Upload your portfolio, case study, or design file. Tell us what you're worried about. We need context, not just pretty pictures.
Our system figures out what you're submitting (portfolio? UI design? UX flow?) and routes it to the right specialized agent.
Claude Vision analyzes your actual design files—not just descriptions. It sees visual hierarchy problems, UX issues, accessibility concerns.
In 2-3 minutes, you get a critique in designer-to-designer voice. Specific. Actionable. No fluff. Just what you need to fix.
Apply the feedback. See your skills improve. Submit again when you're ready. Repeat until your work is actually good.
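The routing step above can be sketched in TypeScript. This is a hypothetical illustration, not the production code: the names (`classifySubmission`, `routeToAgent`) are mine, and the real system presumably uses an LLM call inside n8n rather than keyword matching.

```typescript
// Hypothetical sketch of the routing step: classify a submission,
// then dispatch it to a specialized critique agent.
type SubmissionType = "portfolio" | "ui" | "ux" | "unknown";

interface Submission {
  fileNames: string[];
  userContext: string; // what the designer says they're worried about
}

// Naive keyword classifier standing in for the real LLM-based router.
function classifySubmission(s: Submission): SubmissionType {
  const text = (s.userContext + " " + s.fileNames.join(" ")).toLowerCase();
  if (/portfolio|case stud/.test(text)) return "portfolio";
  if (/flow|journey|wireframe|ux/.test(text)) return "ux";
  if (/screen|interface|ui|button|layout/.test(text)) return "ui";
  return "unknown";
}

// Each agent gets a different focus but the same designer-to-designer voice.
const agentPrompts: Record<SubmissionType, string> = {
  portfolio: "Critique portfolio structure, project selection, case study narrative.",
  ui: "Critique visual hierarchy, spacing, typography, component consistency.",
  ux: "Critique user flows, information architecture, interaction patterns.",
  unknown: "Give a general design-principles critique.",
};

function routeToAgent(s: Submission): string {
  return agentPrompts[classifySubmission(s)];
}
```

The design choice worth noting: classification and dispatch are separate, so adding a new agent (say, accessibility) only means adding one entry to the table.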
Most tools use default Tailwind and call it a day. I built The Crit's design system to signal designer credibility—warm orange/purple gradients that feel creative but professional, typography that's readable but has personality, components that feel consistent without being boring.
The design system isn't just for developers. It's a product feature. Every button, every card, every gradient tells designers "this was built by someone who gets it."
Inspired by designer workspaces and creative energy:
#FF6D0C Primary Orange
#DB1AF1 Secondary Purple
#FFF9F5 Light Orange Background
#FDF5FE Light Purple Background
Fraunces — Display font for headings and hero text
Inter — UI font for body text, buttons, and interface elements
Three-level hierarchy system for consistent visual organization:
Badges: Live, ✨ Premium
Buttons: Submit Design, View Resources
💡 Design System in Action: Every component follows the same principles: consistent spacing, orange/purple gradients, readable typography. The system scales from individual buttons to entire page layouts, creating visual harmony that signals designer credibility.
Here's what makes The Crit different: specialized AI agents. Portfolios get portfolio-focused feedback. UI designs get interface-specific critique. Each agent knows their domain, but they all speak in the same designer-to-designer voice.
The prompt engineering challenge: Getting AI to think like a design professor, not a generic feedback bot. I fed it examples of my best student critiques, trained it on design principles (not just "make it better"), and iterated based on user feedback. The current prompt is 847 words of carefully crafted context—basically my teaching philosophy in AI form.
Knows portfolio structure, case study presentation, project selection. Tells you what hiring managers actually care about.
Sees visual hierarchy problems, spacing issues, typography mistakes. The stuff that makes interfaces feel "off."
Analyzes user flows, information architecture, interaction patterns. Finds the UX problems you can't see anymore.
Catches WCAG violations, color contrast issues, keyboard navigation problems. The stuff that excludes users.
Reviews fundamental design principles—balance, composition, visual weight. The theory that makes good design work.
This isn't random—it's based on how I actually run critiques in class. Every critique follows this structure:
The one thing that will unlock their next level. Not obvious feedback, but the deeper pattern they're missing. Often connects to design principles they haven't internalized yet.
What's working (build confidence first), what's not working (honest but constructive), how it fits their stated goals.
The design theory behind my feedback. Educational component—teach, don't just critique. Connects their work to broader design knowledge.
Specific, actionable improvements. Prioritized (do this first, then this). Realistic for their skill level.
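As a data shape, the four-part structure above might look like the following. The field and function names are my own illustration, not the actual schema:

```typescript
// Illustrative shape for the four-part critique structure described above.
interface Critique {
  keyInsight: string;       // the one thing that unlocks the next level
  whatsWorking: string[];   // build confidence first
  whatsNotWorking: string[]; // honest but constructive
  principlesAtPlay: string; // the design theory behind the feedback
  nextSteps: string[];      // specific, prioritized, realistic actions
}

// Render a critique in the section order used on the site.
function formatCritique(c: Critique): string {
  return [
    `🧠 Key Insight: ${c.keyInsight}`,
    `✅ Working: ${c.whatsWorking.join("; ")}`,
    `⚠️ Not working: ${c.whatsNotWorking.join("; ")}`,
    `📐 Principles at Play: ${c.principlesAtPlay}`,
    `🚀 Suggested Next Steps: ${c.nextSteps.map((s, i) => `${i + 1}. ${s}`).join(" ")}`,
  ].join("\n");
}
```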
Every critique follows the structure above, but here's what makes it different: it's specific, actionable, and educational. Not "improve your design" but "your hero section takes 80% of the screen while your value proposition is buried in a tiny text blob—here's why that's a problem and how to fix it."
Example from an actual critique:
🧠 Key Insight: Your portfolio treats every project equally, but hiring managers need to see your best work first. The current structure forces them to dig through 6 projects to find the 2 that actually demonstrate the skills they're hiring for.
📐 Principles at Play: Visual hierarchy isn't just about size—it's about strategic emphasis. Your strongest case study should be positioned where it gets the most attention (above the fold, first in the grid). This is portfolio design 101, but most designers don't realize they're burying their best work.
🚀 Suggested Next Steps: Reorder your projects so your strongest 2-3 case studies appear first. Add a "Featured" section at the top that highlights your best work. Then organize the rest by relevance to the types of roles you're applying for.
💡 Note: This is the kind of specific, actionable feedback designers actually need. Not generic compliments. Not vague suggestions. Real critique that helps them improve.
Building The Crit taught me that AI feedback is only as good as the system behind it. Generic AI responses don't help designers. You need specialized agents, visual analysis, and a voice that actually sounds like a designer talking to another designer.
From file upload to critique delivery in 2-3 minutes. Here's how it actually works:
Formidable handles file uploads, validates file types (images, PDFs), converts PDFs to images, stores in Supabase
Submission analyzed to determine design type, routes to appropriate specialized agent (portfolio, UI, UX, etc.)
Claude AI Vision analyzes actual design files, identifying visual issues, UX problems, accessibility concerns
Specialized agent generates hyper-relevant feedback in designer-to-designer voice, specific and actionable
Critique saved to Supabase, status updated, real-time polling delivers results to user interface
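The final polling step can be sketched as follows. The endpoint path and response shape are hypothetical; the cache-busting timestamp parameter and the 10-minute ceiling mirror what the pipeline describes:

```typescript
// Sketch of the client-side polling step: poll the status endpoint until the
// critique is ready. A fresh timestamp param makes every poll a unique URL,
// defeating stale caches. Endpoint path and Status values are assumptions.
type Status = "processing" | "complete" | "failed";

function buildStatusUrl(base: string, submissionId: string, now: number): string {
  // Cache-busting: the `t` param changes on every poll.
  return `${base}/api/critiques/${submissionId}/status?t=${now}`;
}

async function pollUntilDone(
  fetchStatus: (url: string) => Promise<Status>,
  base: string,
  submissionId: string,
  intervalMs = 2000,
  timeoutMs = 10 * 60 * 1000, // 10-minute absolute ceiling
): Promise<Status> {
  const start = Date.now();
  while (Date.now() - start < timeoutMs) {
    const status = await fetchStatus(buildStatusUrl(base, submissionId, Date.now()));
    if (status !== "processing") return status;
    await new Promise((r) => setTimeout(r, intervalMs));
  }
  return "failed"; // surface the timeout instead of spinning forever
}
```

Injecting `fetchStatus` keeps the loop testable without a network.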
Why I started here: Wanted to move fast, test the concept. Got a prototype running in days.
What didn't work: Hit credit limits, couldn't customize feedback quality. Spent more time fighting the tools than improving the product.
What happened: Switched to n8n + Claude for more control over AI prompting. Rebuilt everything. Time cost: 3 weeks.
Result: Feedback quality jumped from 6/10 → 8.5/10. Worth it for quality, but more technical overhead.
What happened: Critiques finished successfully, but the API kept saying "processing" for 3+ minutes. Users saw stale data even though fresh critiques were sitting in the database. Trust = destroyed.
Solution: Multi-strategy cache-busting: Next.js route configuration (dynamic = 'force-dynamic'), timestamp-based Supabase headers, a parallel multi-query approach, and extended timeout thresholds (3 min → 10 min).
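The route-configuration part of that fix is a couple of exports in the Next.js App Router. The file path here is hypothetical; `dynamic` and `revalidate` are real Next.js route segment options:

```typescript
// app/api/critiques/[id]/route.ts (hypothetical path)
// Opt this route out of Next.js full route caching so every request
// hits the database instead of returning a stale cached response.
export const dynamic = "force-dynamic";
export const revalidate = 0; // never serve a cached copy
```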
What happened: Analyzing multiple images takes 5-10 minutes. Vercel's default timeout is 5 minutes. Complex critiques were dying mid-generation. Users got nothing.
Solution: Dynamic timeout thresholds based on submission complexity (single image = 60s, 2 images = 90s, 3+ images = 180s), maximum absolute timeout of 10 minutes, intelligent retry logic.
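The timeout policy above reduces to a small pure function. A minimal sketch, using the thresholds stated in the text (function name is my own):

```typescript
// Scale the analysis timeout with submission complexity,
// capped at the 10-minute absolute ceiling.
const ABSOLUTE_MAX_MS = 10 * 60 * 1000;

function analysisTimeoutMs(imageCount: number): number {
  let base: number;
  if (imageCount <= 1) base = 60_000;        // single image: 60s
  else if (imageCount === 2) base = 90_000;  // two images: 90s
  else base = 180_000;                       // three or more: 180s
  return Math.min(base, ABSOLUTE_MAX_MS);
}
```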
What happened: Google Form submission = not great for a design tool. Built proper frontend with v0 + Cursor. Designers care about UX—the submission experience had to match the critique quality.
What happened: Early versions gave generic AI feedback: "Your design looks good! Consider improving spacing." Useless. Then a Claude update changed the output style, and I spent 48 hours reworking the prompt.
Solution: Built prompt drift detection into workflow. Now: 847 words of carefully crafted context that maintains consistent voice even when AI models update.
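One simple form drift detection can take is a structural check: if a model update stops producing the required critique sections, flag the output before it reaches a user. This is an illustrative sketch, not the production workflow; the section labels come from the critique structure shown on the site.

```typescript
// Sketch of prompt-drift detection: after a model update, verify the
// critique still contains the required sections before trusting it.
const REQUIRED_SECTIONS = ["Key Insight", "Principles at Play", "Suggested Next Steps"];

// Returns the missing sections; an empty array means no drift detected.
function detectDrift(critique: string): string[] {
  return REQUIRED_SECTIONS.filter((section) => !critique.includes(section));
}

function isDrifted(critique: string): boolean {
  return detectDrift(critique).length > 0;
}
```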
Solution: Agent router analyzes submission type → routes to specialized agent → agent provides domain-specific critique → consistent designer-to-designer voice across all agents.
The Crit isn't just a critique tool. It's a design education platform. I created 50+ resource pages that actually help designers—portfolio guides, design principles, tool reviews. Not SEO fluff. Real content that makes designers better.
I also built a Reddit automation system that responds to design feedback requests: same AI system, same honest voice, same critique quality, delivered where designers are already asking for help.
The vision: Phase 1 was nailing the critique experience. Phase 2 is adding educational layers (newsletter, resource library). Phase 3 is building community (peer reviews, mentorship). Phase 4 is creating the anti-bootcamp—a scalable, critique-first design school.
Comprehensive guides covering portfolio development, design fundamentals, tools & workflow, career & jobs, and more. Each page targets specific keywords while providing genuine value.
Built an AI-powered system that automatically responds to design feedback requests on Reddit. Uses CritMachine to analyze submitted URLs, routes to specialized agents, and transforms critiques into Reddit-appropriate responses.
How it works: The system monitors r/graphic_design, r/design_critiques, and r/UI_Design for posts requesting feedback. When it finds a submission with a URL, it runs the URL through CritMachine, routes it to the right specialized agent, and transforms the critique into a Reddit-appropriate reply.
💡 Why this matters: Same critique quality, delivered where designers are already looking. It's about meeting users where they are, not forcing them to come to you.
The Crit demonstrates my ability to bridge professor brain and builder brain—taking 15 years of teaching expertise and turning it into a scalable product. It's not just code. It's a system that actually helps designers improve. From AI systems to design systems to content strategy, I handled every layer.
The ultimate goal: Democratize access to quality design education. Make good critique available to every designer, regardless of their educational background, location, or budget. This is the anti-bootcamp—a scalable, critique-first design school.
Understanding what designers actually need, building a design system that signals credibility, creating UX that doesn't waste time
Building specialized agents, routing critiques intelligently, maintaining consistent voice across all agents
Next.js, TypeScript, Supabase, file handling, real-time status updates, cache-busting strategies
50+ educational resource pages, SEO optimization, Reddit automation, content that actually helps designers
Orange/purple gradient system, typography hierarchy, component library, design principles that scale
Fixing cache issues, optimizing timeouts, building reliable systems, making AI feedback actually useful
The Crit is live. Submit your portfolio or design work and see what honest, actionable feedback actually looks like. Or browse the 50+ resource pages I built to help designers improve.
Want to talk about this project? hello@nikkikipple.com