This interview is with David Moosmann, Founder, LearnClash.
What was the single most decisive step that took you from microbiologist to solo founder shipping an AI quiz platform on iOS and Android?
Honestly, the decisive step wasn’t learning to code. It was deciding I didn’t need to.
I have a Master’s in Microbiology. I spent years in labs, not in front of an IDE. When I decided to build LearnClash, the traditional path would have been to either spend years learning software engineering or find a technical co-founder. I did neither.
Instead, I went all in on AI-assisted development. I used Claude Code to build the entire product: a Flutter app on iOS and Android, a Firebase backend, AI-powered question generation, spaced repetition algorithms, ELO ranking, and more. I never read the code. What I did was use AI as a tutor to learn the architecture deeply enough that I could direct it, catch its mistakes, and make the right product decisions.
That shift in mindset was the decisive step. I stopped thinking “I need to become an engineer” and started thinking “I need to become the best person at directing AI to build what I want.” My science background actually helps here. Running experiments, forming hypotheses, iterating on results: that’s the same loop whether you’re in a microbiology lab or shipping software.
The tools exist now for domain experts to build real products without writing code themselves. You need taste, you need to understand your users deeply, and you need relentless iteration. Those are founder skills, not engineering skills. The engineering part, I delegate to AI.
What took me from microbiologist to solo founder wasn’t a single technical skill. It was the conviction that understanding the problem matters more than understanding the syntax.
What specific gap in traditional quiz apps convinced you to pair competitive 1v1 duels with spaced repetition?
The gap is simple: quiz apps entertain, but they don’t teach. And learning apps teach, but they’re boring.
I played QuizDuel every day for 12 years with my mum. Twelve years. Thousands of rounds. And after all that time, I couldn’t tell you a single thing I’d learned. The questions were random trivia designed to stump you, not to build knowledge. You’d get a question wrong, never see it again, and move on. There was no system for retention, no progression on a topic, and nothing connecting one round to the next.
On the other side, you have apps like Anki or pure flashcard tools. They’re built on solid learning science, and spaced repetition works. But they feel like homework. There’s no motivation loop, no social pressure, and no reason to come back tomorrow beyond discipline.
The gap was obvious: nobody had combined the elements that make quiz games addictive (competition, ELO rankings, challenging someone you know) with the factors that make learning actually stick (spaced repetition, topic mastery, progressive difficulty).
In LearnClash, every question you answer feeds into a spaced repetition cycle. Get it wrong, and it comes back sooner. Get it right repeatedly, and it progresses from Learning to Known to Mastered. But you experience this through duels, not flashcards. You’re competing against a real person, climbing ELO tiers, and picking topics you care about. The learning happens because the game demands it, not because you’re forcing yourself through a study session.
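That cycle can be sketched in a few lines. The promotion threshold, the doubling interval, and the demotion-on-miss rule below are illustrative assumptions, not LearnClash's actual algorithm:

```python
from dataclasses import dataclass

# Stage names come from the interview; thresholds and intervals are made up.
STAGES = ["Learning", "Known", "Mastered"]
BASE_INTERVAL_DAYS = 1

@dataclass
class CardState:
    stage: int = 0          # index into STAGES
    streak: int = 0         # consecutive correct recalls at this stage
    interval_days: int = BASE_INTERVAL_DAYS

def review(state: CardState, correct: bool) -> CardState:
    """Update one question's state after a duel answer."""
    if correct:
        state.streak += 1
        state.interval_days *= 2          # right answers wait longer to reappear
        if state.streak >= 3 and state.stage < len(STAGES) - 1:
            state.stage += 1              # promote: Learning -> Known -> Mastered
            state.streak = 0
    else:
        state.streak = 0
        state.interval_days = BASE_INTERVAL_DAYS  # wrong answers come back sooner
        if state.stage > 0:
            state.stage -= 1              # a miss can demote a question
    return state
```

The point of the sketch is the asymmetry: a single miss resets the interval immediately, while promotion demands repeated success across growing gaps.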
Competition is the motivation engine. Spaced repetition is the learning engine. Neither works as well alone.
How did you architect real-time head-to-head matches to keep them both low-latency and fair at scale?
I made an early decision that changed everything: I didn’t build real-time matches at all.
LearnClash uses asynchronous, turn-based duels. You start a duel, pick your topic, answer your questions, and your opponent gets 48 hours to play their turn. It’s more like chess by mail than a live game show.
This was a deliberate architectural choice, not a limitation. Real-time multiplayer requires WebSocket infrastructure, matchmaking queues, disconnect handling, latency compensation, and anti-cheat systems that punish players for having a slow connection. For a solo founder, that’s a massive engineering burden. More importantly, it’s the wrong design for learning.
Learning benefits from low-pressure environments. When you’re racing a live clock against someone watching you in real time, you guess instead of think. You optimize for speed, not retention. By making duels asynchronous, players can take their time, actually consider questions, and engage the retrieval process that makes spaced repetition work.
The architecture is straightforward: Firebase Firestore for game state, Cloud Functions for validation and scoring, and push notifications to alert players when it’s their turn. ELO calculations happen server-side after both players complete their turns, so there’s no way to manipulate the outcome. Fairness comes from the fact that both players answer the same questions independently, with no information leakage between turns.
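As a concrete illustration of the server-side scoring step, here is the standard Elo update formula; the K-factor and rounding are illustrative choices, not LearnClash's actual parameters:

```python
def elo_update(rating_a: float, rating_b: float, a_won: bool, k: float = 32.0):
    """Standard Elo: expected score from the rating gap, then a K-factor adjustment."""
    expected_a = 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))
    score_a = 1.0 if a_won else 0.0
    new_a = rating_a + k * (score_a - expected_a)
    new_b = rating_b + k * ((1.0 - score_a) - (1.0 - expected_a))
    return round(new_a), round(new_b)
```

Because both turns are complete before this runs, the function is a pure calculation over submitted answers, which is exactly why running it in a Cloud Function removes any client-side manipulation surface.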
The result is an architecture that scales naturally (no persistent connections, no matchmaking servers), stays fair by design (no latency advantages, no disconnection exploits), and actually serves the learning mission better than real-time ever could.
What end-to-end process do you use to generate and validate AI-written questions in real time so accuracy and difficulty stay on target?
The generation isn’t real-time in the traditional sense. Questions are generated on demand when a player picks a topic, then cached and reused. However, the pipeline from “user types a topic” to “validated questions ready to play” is fast enough to feel instant.
It starts with Gemini Flash. When someone creates a topic, we send a structured prompt that specifies the subject, difficulty distribution, and format constraints. The AI returns questions with four answer options, a correct answer, and a short explanation. However, raw AI output is never trusted directly.
Every generated question goes through a multi-stage validation pipeline. First, structural validation: does it have exactly four options, is there exactly one correct answer, is the explanation present, and are all four options distinct? Questions that fail any structural check get rejected and regenerated.
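The structural pass amounts to a handful of checks. This sketch assumes a JSON shape with `options`, `answer`, and `explanation` fields, which is an illustrative guess at the format, not the real schema:

```python
def validate_question(q: dict) -> bool:
    """Structural checks: four distinct options, one correct answer, an explanation."""
    options = q.get("options", [])
    return (
        len(options) == 4
        and len(set(options)) == 4                 # no duplicate options
        and q.get("answer") in options             # the stated answer must exist
        and bool(q.get("explanation", "").strip()) # explanation must be non-empty
    )
```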
Second, a separate AI pass reviews each question for factual accuracy. This involves a different prompt with a different role: act as a fact-checker, flagging anything uncertain or outdated. Questions flagged as potentially inaccurate get dropped.
Third, difficulty calibration. We don’t just label questions easy, medium, or hard based on the AI’s guess. Over time, actual player performance data feeds back into difficulty scores through our spaced repetition system. A question that 90% of players answer correctly drifts toward “easy” regardless of how it was originally labeled. This means difficulty accuracy improves with every duel played.
The result is a system where AI generates the raw material, while validation, fact-checking, and calibration happen in layers. There is no single point of failure. Players occasionally report bad questions too, and those reports feed directly into the review queue.
Quality at scale requires assuming the AI will be wrong sometimes and building systems that catch its errors.
Which metric most reliably predicts durable learning in LearnClash?
Mastery rate per topic. Not time spent, not questions answered, not even win rate.
We track every question through three stages: Learning, Known, and Mastered. A question moves from Learning to Known after you answer it correctly across multiple spaced intervals. It reaches Mastered only after you’ve proven you can recall it consistently over longer gaps. That progression is driven by spaced repetition, the most well-validated technique in learning science.
The metric that predicts durable learning is the percentage of questions in a topic that reach the Mastered stage. When someone has 80% or more of a topic in Mastered, they genuinely know that material. We’ve seen this hold up because the spacing intervals are long enough that you can’t brute-force it. You can’t cram your way to Mastered; it takes days of correctly recalling information at increasing intervals.
What surprised me is how poorly other metrics correlate. Time spent in the app tells you almost nothing. Someone can spend 30 minutes and learn nothing if they’re just guessing. Win rate in duels is misleading too, because you can win duels on topics you already know while ignoring weak spots.
The mastery rate works because it’s the only metric that directly measures retrieval success over time, which is what learning actually is. It’s also what separates LearnClash from traditional quiz apps. In most trivia games, there’s no concept of mastery at all. You answer a question, and it’s gone. In LearnClash, every question you encounter becomes part of a system that tracks whether you actually retained it.
That’s also why we built practice mode around this metric. Players can see exactly which questions are still in the Learning stage and drill them on their own schedule. The duel is the engagement hook, but mastery rate is the learning signal.
How do you design monetization so competition remains motivating without pay-to-win dynamics or undermining learning?
The rule is simple: nothing you pay for can affect the outcome of a duel.
Free players and premium players answer the same questions, get the same ELO rating, and compete on completely equal terms. There is no way to buy hints during a duel, no way to buy extra time, and no way to purchase a competitive advantage. If you win, it’s because you knew more. That’s it.
We also made a decision early on that LearnClash would have zero ads. Not “ads on the free tier, ad-free on premium.” Zero ads, period. Most quiz apps are borderline unplayable on their free tiers because of interstitial ads between every round. That model directly undermines learning because it breaks concentration and turns the experience into an ad-delivery mechanism with some trivia attached.
Instead, our pricing model is subscription-based. Premium unlocks convenience and depth features: unlimited practice mode (spaced repetition on your own schedule), unlimited topic creation, unlimited AI chat for explanations, advanced statistics, and cosmetic rewards. None of those affect duel outcomes.
The free tier is generous on purpose. You get unlimited duels, full ELO rankings across all eight tiers, spaced repetition (once per day), and one topic creation per day. A free player can absolutely compete at the highest level and learn effectively. Premium just removes friction for people who want to go deeper.
This alignment between monetization and learning is critical. The moment paying players have a competitive edge, the ELO system loses meaning. Rankings only matter if they reflect actual knowledge. So we never compromise that, and we never will.
What is the one AI-driven marketing workflow that most improved LearnClash’s growth that another solo founder could replicate this week?
Automated social media content generation, posted on a schedule, with zero daily effort from me.
I built three Cloud Functions that run on cron schedules. Every morning, one generates a “daily quiz card” for a trending topic. Every evening, another posts a topic challenge. Every Monday, a third creates a weekly challenge. Each function uses AI to generate the hook text, create a visual quiz card image, and post directly to six platforms: X, Facebook, Instagram, Threads, Reddit, and Discord.
Here’s what makes it replicable: the entire system costs almost nothing to run. The AI generates just two things, a short hook and the quiz card image. The hook format is consistent: an interesting fact about the topic on line one, a teaser question on line two. That’s it. No long captions, no hashtag research, no manual design work.
The key insight is that educational content performs well on social media when it’s bite-sized and visual. A quiz card with “Did you know the human brain uses 20% of the body’s energy?” and a multiple-choice question gets engagement because people want to test themselves. They don’t need to download the app to interact with the post, but some do.
Any solo founder could replicate this pattern in a weekend. You need an AI API for text generation, an image generation API for visuals, a scheduling service, and API keys for your social platforms. The whole pipeline is: generate content, create image, post to platforms, repeat on schedule. Once it’s running, it’s hands-off.
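The pipeline decomposes into a few composable steps. Everything below is stubbed; you would back each function with your chosen text, image, and social-platform APIs, and trigger the whole thing from a scheduler:

```python
# Hypothetical daily pipeline: generate content, create image, post, repeat.
def generate_hook(topic: str) -> str:
    """Two-line hook: a surprising fact, then a teaser question (stubbed)."""
    return f"Did you know something surprising about {topic}?\nCan you beat this quiz?"

def render_quiz_card(hook: str) -> bytes:
    """Turn the hook into a shareable quiz card image (stubbed)."""
    return hook.encode("utf-8")

def post_everywhere(image: bytes, hook: str, platforms: list[str]) -> list[str]:
    """Post to each platform's API (stubbed); returns what was posted where."""
    return [f"posted:{p}" for p in platforms]

def daily_quiz_card(topic: str) -> list[str]:
    """The whole run: one topic in, one post out to every platform."""
    hook = generate_hook(topic)
    image = render_quiz_card(hook)
    platforms = ["x", "facebook", "instagram", "threads", "reddit", "discord"]
    return post_everywhere(image, hook, platforms)
```

The design choice worth copying is that each stage has one input and one output, so you can swap the AI provider, image tool, or platform list without touching the rest.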
The compound effect matters most. Three posts per day across six platforms means consistent presence without me touching anything. Over weeks, that builds familiarity. People see LearnClash quiz cards in their feed repeatedly, and when they’re ready for a trivia app, we’re already a familiar name.
For an educator or SMB that wants to launch an AI-powered head-to-head quiz experience, what is the leanest first version you’d ship?
Strip it down to three things: a question engine, a matchmaking loop, and a scoreboard. Everything else can wait.
For the question engine, use any LLM API. Send it a topic, ask for six multiple-choice questions with four options each, a correct answer, and a brief explanation. Add basic validation: check for exactly four options, one correct answer, and no duplicates. That’s your content layer. Don’t build a question bank. Don’t curate manually. Let the AI generate on demand and validate programmatically.
For matchmaking, start with invite links. Skip algorithmic matchmaking entirely. Let users share a link; the other person joins, and both answer the same six questions independently. Compare scores. That’s a duel. You can build this with a single database document per match and a Cloud Function that scores when both players submit.
For the scoreboard, track wins and losses. Don’t implement ELO yet. A simple win/loss ratio is enough to keep people competitive in version one. ELO, ranked tiers, and seasonal resets are version two problems.
What to leave out: spaced repetition, topic creation, AI chat, practice mode, social features, cosmetics, and subscriptions. All of those are valuable (we built all of them into LearnClash), but none of them are needed to validate whether people want to compete on knowledge.
The leanest version is a web app. Use Flutter or React, with Firebase for authentication and the database, and one Cloud Function for scoring. You could ship this in a weekend using AI-assisted development. I know because that’s roughly how LearnClash evolved from a simple QuizDuel alternative into what it is today.
Ship the duel loop first. If people play a second round, you have something. If they don’t, no amount of features will fix that.