SaaS

B2B

SimpleClub – Enabling Trainers to Create Personalised Exercises in Minutes

SimpleClub is one of Germany’s leading EdTech platforms helping companies train Ausbilders (trainers) and Azubis (trainees).
As part of its B2B expansion, the company needed to empower trainers to create custom exercises quickly and efficiently for IHK exam preparation — without the repetitive manual work.

This project aimed to design an AI-powered tool that simplifies exercise creation, reduces effort, and enhances consistency across large organizations.

Timeline

Q3 2024

Sprint

3–4 sprints

Platform

Web + Mobile

Team

1 PM, 5 Engineers, AI & Content teams

Role

Senior Product Designer

Context

  • Professional trainers (Ausbilders) needed fast, personalised exercises for their trainees, but spent 3–4 hours per week creating tasks manually: a slow, repetitive process that was hard to tailor to each learner.

  • The lack of adaptive, data-driven content also hurt engagement: our behavioural analysis showed a 45% drop after the second week. Trainers wanted a way to generate high-quality personalised tasks quickly without losing control or transparency, and students needed exercises matched to their pace, mistakes, and skill level to stay motivated.

The goal

  • Enable trainers to generate high-quality exercises in minutes using AI — with full control and transparency.

 My Role

  • I led:

    • Problem framing & product strategy

    • UX research (interviews, data analysis)

    • Designing the AI workflow & interaction logic

    • UI for trainer + student sides

    • Collaboration with engineers (AI + FE)

    • Prototype testing & design delivery

The challenge

How might Ausbilders (trainers) create exercises for their Azubis (trainees) in the easiest and most innovative way?

SimpleClub’s B2B motion grew with enterprise clients (e.g., BMW, Bosch, Deutsche Bank). Trainers needed scalable ways to digitize exam prep—fast.

Observed problems (from support tickets, sales notes, and trainer calls):

  1. The existing content creator was built for lessons, not exams or tests.

  2. Trainers were copy-pasting from Word/Drive/Moodle, losing structure and time.

  3. No first-class validation (correct/incorrect answers, points), so quality drifted.

  4. At the same time, the product team faced internal pressure to ship fast — raising the question:
    👉 Are we building the right thing, or just building fast?

Legacy Tool Audit

Before redesigning the experience, I audited the existing lesson creator to understand how trainers currently build and organize content. The tool was optimized for lessons, not structured exams, which made it inefficient for creating validated exercises.

The audit revealed that all content types—lessons, flashcards, and tests—sat on the same hierarchy, and trainers had to manually create every question without difficulty settings or validation logic. This structure worked for learners but not for trainers managing multiple exam topics.

To reimagine this flow, we held a Crazy 8 ideation session with designers, developers, and PMs, using the findings from the audit to sketch solution directions and assess their feasibility from both a tech and a business perspective. Together, we identified the opportunity to reuse the AI Tutor stack as the foundation for automated exercise creation. Extending the current creator (instead of building a new tool) allowed us to move faster while validating the idea with real users.
Option A — Wizard (step-by-step builder)
  • ✅ Guidance for new users

  • ❌ Too time-consuming for power users; too much decision-making early

Option B — Bulk upload/import first
  • ✅ Fits high-volume orgs, doc-heavy reality

  • ❌ Connectors (Drive, Forms) add infra complexity; not MVP-feasible

Option C — AI-First “Copy → Paste → Convert” (Chosen)
  • ✅ Matches real behavior (paste from notes/Word)

  • ✅ Fastest time-to-value; minimal configuration upfront

  • ✅ Reuses AI Tutor pipeline + existing creator canvas

Why it balanced effort and innovation: reusing existing tech cut development cost, AI-side parsing reduced cognitive load for users, and inline edits preserved control.

Quantitative Research: Survey

To understand how trainers created and managed exercises, we ran both a quantitative survey and qualitative interviews. The goal was to uncover not just how they worked, but why — what tools they trusted, what slowed them down, and what they actually needed from automation.


We ran a Typeform survey and email campaign targeting trainers across B2B clients.
Our goal was to understand:

  • How trainers currently create and manage exercises.

  • What question types they use most frequently.

  • Where their content is stored and how they organize it.

Insights from 100+ responses:

  • 46% managed fewer than 50 questions; 38% handled 100–500.

  • Most common types:

    • 25.6% Free text

    • 25.6% Multiple choice

    • 16.3% Mathematical questions

  • Main storage tools: Google Drive (26.3%), MS Word (26.3%), Moodle (13.2%).

💡 Takeaway: trainers relied heavily on offline or scattered systems and lacked a streamlined digital workflow.

Qualitative Research: Interviews

We followed with remote interviews to explore pain points more deeply. Trainers consistently described time pressure, scattered tools, and the repetitive effort of validating each question manually.

Common patterns:

  • Time pressure: most had only 2–4 minutes to create simple question types like MCQs.

  • Fragmented tools: switching between Word, Excel, and LMS platforms.

  • Validation issues: no clear process to check answers or difficulty levels.

One trainer summarized it perfectly:

“It’s not about writing the question — it’s about organizing and validating them without wasting time.”

JTBD Statement

When I need to prepare an exam task set, I want to turn my notes or docs into structured exercises in minutes, so I can focus on teaching rather than formatting.

Design Exploration

After the Crazy 8 ideation session, I translated the strongest ideas into quick wireframes to visualize how AI could simplify exercise creation. The concept centered on a Copy → Paste → Convert flow — trainers could paste existing content or upload a document, and AI would instantly turn it into editable exercises.

The design featured a dual-pane layout: raw input on the left and a live task preview on the right. This gave trainers full visibility into what AI generated while keeping the process fast and intuitive. These early wireframes helped us validate the interaction flow and confirm that automation could feel effortless without losing user control.
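
To make the converted output concrete, below is a minimal sketch of the task model the preview pane could render. All type names are hypothetical (not the shipped code); the three variants mirror the question types trainers reported using most in the survey.

```typescript
// Hypothetical task model for the live preview pane (illustrative names only).

type FreeTextTask = {
  kind: "free_text";
  prompt: string;
  sampleSolution?: string; // shown to trainers during review, not to students
};

type MultipleChoiceTask = {
  kind: "multiple_choice";
  prompt: string;
  options: { text: string; correct: boolean }[];
};

type MathTask = {
  kind: "math";
  prompt: string;
  expectedAnswer: string; // numeric or symbolic result to validate against
};

// Every generated task carries an id and points; usability testing later set
// the default to 1 point per correct answer.
type GeneratedTask = (FreeTextTask | MultipleChoiceTask | MathTask) & {
  id: string;
  points: number;
};
```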

Collaboration with AI & Engineering

Close collaboration with engineers was essential due to the complexity of AI behaviour. I worked daily with AI and frontend teams to define prompt structures, clarify metadata needs, and map all UI states including loading, retry, fallback, and weak-output scenarios. I designed safe regeneration and inline editing flows to avoid content loss and make the system feel predictable.
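
As an illustration of that state mapping, here is a hedged sketch of the conversion flow as a discriminated union, reusing the GeneratedTask type from the earlier sketch. The state names and the 0.7 confidence threshold are assumptions for illustration, not the production values.

```typescript
// Hypothetical UI state machine for the convert flow (illustrative only).
// "weakOutput" models the case where the AI returns parseable but
// low-confidence results, so the UI asks for review instead of publishing.

type ConversionState =
  | { status: "idle" }
  | { status: "converting" }
  | { status: "weakOutput"; tasks: GeneratedTask[]; warning: string }
  | { status: "failed"; error: string; canRetry: boolean }
  | { status: "done"; tasks: GeneratedTask[] };

type ConversionEvent =
  | { type: "CONVERT" }
  | { type: "SUCCESS"; tasks: GeneratedTask[]; confidence: number }
  | { type: "ERROR"; error: string }
  | { type: "RETRY" };

function nextState(state: ConversionState, event: ConversionEvent): ConversionState {
  switch (event.type) {
    case "CONVERT":
      return { status: "converting" };
    case "RETRY":
      // Retry is only meaningful from a failed or weak-output state.
      return state.status === "failed" || state.status === "weakOutput"
        ? { status: "converting" }
        : state;
    case "SUCCESS":
      // Low-confidence output falls back to review-first instead of failing.
      return event.confidence < 0.7
        ? { status: "weakOutput", tasks: event.tasks, warning: "Please review the generated tasks." }
        : { status: "done", tasks: event.tasks };
    case "ERROR":
      return { status: "failed", error: event.error, canRetry: true };
  }
}
```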

This collaboration made the final experience stable, smooth, and transparent for trainers.

Prototype & Testing

I designed high-fidelity prototypes in Figma and ran quick usability tests with internal trainers.
Findings:

  • 3/4 ignored “Allocate generated task” → CTA unclear → removed it.

  • 4/4 didn’t use difficulty settings → de-scoped for MVP.

  • 3/4 asked for points (with a simple rule) → defaulted to 1 point per correct answer.

Iteration 2: From Confusion to Clarity

We redesigned the creation flow:

  • Added a conversion status chip (Converting / Failed / Completed).

  • Simplified the primary CTA: Convert → Review → Publish.

  • Added keyboard affordances: Enter to add an answer, Backspace to remove one (see the sketch below).

  • Clarified destructive actions (trash icon + undo).

  • Let users manually refine converted tasks after AI generation.
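
To make the keyboard affordances concrete, here is a minimal sketch (assumed names, not production code) of how the answer editor could interpret Enter and Backspace:

```typescript
// Hypothetical keydown handling for the answer list (illustrative only).
// Enter inserts a new empty answer row below the current one; Backspace on
// an already-empty row removes it (a separate undo stack covers recovery).

function handleAnswerKeyDown(
  key: string,
  answers: string[],
  index: number
): string[] {
  if (key === "Enter") {
    return [...answers.slice(0, index + 1), "", ...answers.slice(index + 1)];
  }
  if (key === "Backspace" && answers[index] === "" && answers.length > 1) {
    return [...answers.slice(0, index), ...answers.slice(index + 1)];
  }
  return answers; // all other keys leave the list unchanged
}
```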

Result: fewer errors, faster task setup, and higher satisfaction scores.

Visual Design: Designing the AI Hero

Since AI was the centerpiece of this feature, we designed a unique, glowing AI icon that symbolized intelligence and creativity. The final selected icon visually communicated “smart assistance” through layered shapes and soft gradients.

Option selected

Micro-Interactions That Build Trust

We designed five icon states using Rive animations to make AI interactions feel alive and trustworthy:

Idle state

Ready state

Button loading state

Success state

Content loading state

Final Design

From Plain Text to Interactive Exercises

Trainers could now transform long Word documents or PDFs into fully interactive tests — directly inside SimpleClub — in seconds.
AI automatically detected question structures, applied formatting, and validated content.
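
As a sketch of what that validation step could check, here is an illustrative pass over the GeneratedTask type from the earlier sketch. The rules are assumptions derived from the usability findings (for example, the 1-point default), not the shipped logic.

```typescript
// Hypothetical validation pass before tasks reach the review pane.

function validateTask(task: GeneratedTask): string[] {
  const issues: string[] = [];
  if (task.prompt.trim() === "") {
    issues.push("Question text is empty.");
  }
  if (task.kind === "multiple_choice") {
    const correctCount = task.options.filter((o) => o.correct).length;
    if (correctCount === 0) issues.push("No correct answer is marked.");
    if (task.options.length < 2) issues.push("At least two options are required.");
  }
  if (task.points <= 0) {
    issues.push("Points must be positive (default: 1 per correct answer).");
  }
  return issues; // an empty list means the task passes validation
}
```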

The impact

Shipped the MVP as a Q1 beta, achieving strong adoption among B2B clients. Recognized as an “easy-to-build, high-impact” feature within the existing content creation system.

3.5× increase

in usage within the first month post-launch.

90% reduction

in time to create MCQ questions (from several minutes to 15–30 seconds).

What I have learned

This project demonstrated that transparency is one of the strongest drivers of trust in AI-assisted creation. Trainers valued speed and control far more than complex features. Small interaction refinements—such as inline editing or clear fallback states—had a surprisingly large impact on adoption. Building modularity early also made global scaling far easier.

Let's Connect

Let's Grow Together

Feel free to hit me up; I am looking forward to hearing from you.

© 2025 Yasmin Elsammak
